This is a simple Python Flask application that performs CRUD operations on a MySQL database. The project contains scripts and Kubernetes manifests for deploying the application, together with a MySQL database, on an AWS EKS cluster and RDS, with accompanying ECR repositories. The deployment also includes monitoring with Prometheus and Grafana, and a CI/CD pipeline.
Before you begin, ensure you have met the following requirements:
- Python
- MySQL (for Ubuntu or Windows)
- AWS CLI configured with appropriate permissions
- Docker installed and configured
- kubectl installed and configured to interact with your Kubernetes cluster
- Terraform
- Helm
- GitHub CLI
- K9s
- Beekeeper Studio
For Database Access:

1. Start your MySQL server.

2. Create a new MySQL database for the application.

3. Update the database configuration in `app.py` to match your local MySQL settings (a configuration sketch follows this list):

   - `DB_HOST`: localhost
   - `DB_USER`: your MySQL username
   - `DB_PASSWORD`: your MySQL password
   - `DB_DATABASE`: your database name

4. Create the table in the database that will be used by your application:

   ```sql
   CREATE TABLE tasks (
       id SERIAL PRIMARY KEY,
       title VARCHAR(255) NOT NULL,
       description TEXT,
       is_complete BOOLEAN DEFAULT false
   );
   ```
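If you would rather not hardcode these values, they can also be supplied as environment variables. A minimal sketch, assuming `app.py` reads its settings from the environment (an assumption; the repository may hardcode them instead):

```bash
# Hypothetical: export the database settings before starting the app.
# The variable names mirror the configuration keys listed above.
export DB_HOST=localhost
export DB_USER=your_mysql_user
export DB_PASSWORD=your_mysql_password
export DB_DATABASE=your_database_name
```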
1. Clone the repository:

   ```bash
   git clone https://github.com/johnbedeir/End-to-end-DevOps-Python-MySQL
   cd End-to-end-DevOps-Python-MySQL/todo-app
   ```

2. Create a virtual environment and activate it:

   ```bash
   python3 -m venv venv
   source venv/bin/activate  # On Windows use `venv\Scripts\activate`
   ```

3. Install the required dependencies:

   ```bash
   pip3 install -r requirements.txt
   ```

4. Start the Flask application:

   ```bash
   python3 app.py
   ```

5. Access the application at `http://localhost:5000`.
1. Build the Docker image:

   ```bash
   docker build -t my-flask-app .
   ```

2. Run the Docker container with the host network (to access the local MySQL server):

   ```bash
   docker run --network=host my-flask-app
   ```

3. Access the application at `http://localhost:5000`.
To run the application using Docker Compose:

```bash
docker-compose up
```

This runs both the application and the database containers, and also creates the table in the database using the SQL script `init-db.sql`.
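For day-to-day use you may prefer to run the stack in the background and follow the logs; a sketch using standard Compose flags:

```bash
# Start both containers in detached mode, rebuilding images if needed
docker-compose up --build -d

# Follow the combined logs of the app and database containers
docker-compose logs -f
```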
To take it down run the following command:

```bash
docker-compose down
```
To build and deploy the application on AWS EKS and RDS execute the following script:

```bash
./build.sh
```
This will build the infrastructure, deploy the monitoring tools, and run several commands:

- EKS (Kubernetes cluster)
- 2x ECR (Elastic Container Registry): one for the app image and one for the DB Kubernetes Job
- RDS (Relational Database Service): an RDS cluster with one instance
- Generate and store the RDS credentials in AWS Secrets Manager
- VPC, subnets, and network configuration
- Monitoring tools deployment (Alertmanager, Prometheus, Grafana)
- Build and push the Docker images for the application and the MySQL Kubernetes Job to ECR
- Create Kubernetes Secrets with the RDS credentials
- Create the namespace and deploy the application and the Job
- Reveal the LoadBalancer URLs for the application, Alertmanager, Prometheus, and Grafana
IMPORTANT: Make sure to update the variables in the script before running it.
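As an illustration only, the variable block at the top of such a script typically looks like the sketch below; every name and value here is hypothetical and must be matched to your own setup:

```bash
# Hypothetical variable block -- adjust the values to your environment
cluster_name="todo-app-cluster"   # EKS cluster name
region="eu-central-1"             # AWS region
app_ecr_repo="todo-app"           # ECR repository for the application image
db_ecr_repo="todo-db-job"         # ECR repository for the DB Kubernetes Job image
namespace="todo-app"              # Kubernetes namespace for the deployment
```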
Once the `build.sh` script is executed, you should see the URLs for all the deployed applications.

NOTE: For Alertmanager, append port `9093` to the URL; for Prometheus, append port `9090`.
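For example (with hypothetical LoadBalancer hostnames standing in for the ones the script prints):

```bash
# Hypothetical hostnames; use the URLs revealed by build.sh
curl http://ALERTMANAGER_LB_HOSTNAME:9093
curl http://PROMETHEUS_LB_HOSTNAME:9090
```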
Screenshots in the repository show how the application looks initially and after data has been added. You can Complete or Delete a task, and the change takes effect automatically in the database.
You will also get the database endpoint URL. Use that URL to access the database with the following command:

```bash
mysql -h RDS_ENDPOINT_URL -P 3306 -u root -p
```
NOTE: Once you run the command you will be asked for the database password. There are two ways to get it:

1. Through K9s: open `k9s` in the terminal, press `Ctrl+A` to navigate to the main screen, then type `/secrets` to search for the Kubernetes secrets. Inside the secrets, type `/rds` to filter for names that include `rds`. With `rds-password` highlighted, press `x` to decode the base64-encoded password.

2. Through AWS: navigate to Secrets Manager in the AWS console, click the created secret `rds-cluster-secret`, and from the Overview tab click Retrieve secret value. This shows the username and the generated password for the database.
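Alternatively, the same value can be fetched with the AWS CLI; a sketch, assuming the secret keeps the name `rds-cluster-secret` used above:

```bash
# Retrieve the secret value (a JSON string with the username and password)
aws secretsmanager get-secret-value \
  --secret-id rds-cluster-secret \
  --query SecretString \
  --output text
```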
Once you are connected to the database through the terminal, you can run the following commands to check the data in the database:

```sql
show databases;
use DATABASE_NAME;
show tables;
select * from TABLE_NAME;
```
In Beekeeper Studio, choose MySQL as the database type, fill in the Host, User, and Password, make sure the port is `3306`, then connect.
This project is equipped with GitHub Actions workflows to automate the Continuous Integration (CI) and Continuous Deployment (CD) processes.
The CI workflow is triggered on pushes to the `main` branch. It performs the following tasks:

- Checks out the code from the repository.
- Configures AWS credentials using secrets stored in the GitHub repository.
- Logs in to Amazon ECR.
- Builds the Docker image for the Python app.
- Builds the Docker image for the MySQL Kubernetes job.
- Tags the images and pushes each one to its Amazon ECR repository.
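Under the hood, the build-and-push steps amount to the standard ECR sequence; a minimal sketch, with hypothetical account ID, region, and repository names:

```bash
# Hypothetical values -- substitute your own account ID, region, and repos
ACCOUNT_ID=123456789012
REGION=eu-central-1
REGISTRY="$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com"

# Authenticate Docker to ECR
aws ecr get-login-password --region "$REGION" \
  | docker login --username AWS --password-stdin "$REGISTRY"

# Build, tag, and push the app image (the DB job image follows the same pattern)
docker build -t todo-app ./todo-app
docker tag todo-app:latest "$REGISTRY/todo-app:latest"
docker push "$REGISTRY/todo-app:latest"
```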
The CD workflow is triggered upon the successful completion of the CI workflow. It performs the following tasks:
- Checks out the code from the repository.
- Configures AWS credentials using secrets stored in the GitHub repository.
- Sets up `kubectl` with the required Kubernetes version.
- Deploys the Kubernetes manifests found in the `k8s` directory to the EKS cluster.
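In essence, the deploy step restores the kubeconfig from the base64-encoded secret and applies the manifests; a sketch (variable and path names are assumptions):

```bash
# Hypothetical sketch of the CD deploy step.
# KUBECONFIG_SECRET holds the base64-encoded kubeconfig (see the secrets below).
echo "$KUBECONFIG_SECRET" | base64 -d > kubeconfig
export KUBECONFIG="$PWD/kubeconfig"

# Apply every manifest in the k8s directory
kubectl apply -f k8s/
```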
The following secrets need to be set in your GitHub repository for the workflows to function correctly:
- `AWS_ACCESS_KEY_ID`: Your AWS Access Key ID.
- `AWS_SECRET_ACCESS_KEY`: Your AWS Secret Access Key.
- `KUBECONFIG_SECRET`: Your Kubernetes config file encoded in base64.
Before using the GitHub Actions workflows, you need to set up the AWS credentials as secrets in your GitHub repository. The included `github_secrets.sh` script automates the process of adding your AWS credentials to GitHub Secrets, which are then used by the workflows. To use this script:

1. Ensure you have the GitHub CLI (`gh`) installed and authenticated.

2. Run the script with the following command:

   ```bash
   ./github_secrets.sh
   ```
This script will:
- Extract your AWS Access Key ID and Secret Access Key from your local AWS configuration.
- Use the GitHub CLI to set these as secrets in your GitHub repository.
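The core of such a script is just two `gh` calls; a sketch, assuming your credentials live in the default AWS CLI profile:

```bash
# Hypothetical sketch: read the credentials from the local AWS CLI config
# and store them as GitHub Actions secrets in the current repository.
gh secret set AWS_ACCESS_KEY_ID --body "$(aws configure get aws_access_key_id)"
gh secret set AWS_SECRET_ACCESS_KEY --body "$(aws configure get aws_secret_access_key)"
```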
Note: It's crucial to handle AWS credentials securely. The provided script is for demonstration purposes, and in a production environment, you should use a secure method to inject these credentials into your CI/CD pipeline.
These secrets are consumed by the GitHub Actions workflows to access your AWS resources and manage your Kubernetes cluster.
For the Continuous Deployment workflow to function properly, it requires access to your Kubernetes cluster. This access is granted through the `KUBECONFIG` file. You need to add this file manually to your GitHub repository's secrets to ensure secure and proper deployment.

To add your `KUBECONFIG` to GitHub Secrets, follow these steps:
1. Encode your `KUBECONFIG` file to a base64 string:

   ```bash
   cat ~/.kube/config | base64
   ```

2. Copy the encoded output to your clipboard.

3. Navigate to your GitHub repository on the web.

4. Go to Settings > Secrets > New repository secret.

5. Name the secret `KUBECONFIG_SECRET`.

6. Paste the base64-encoded `KUBECONFIG` data into the secret's value field.

7. Click Add secret to save the new secret.
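If you prefer the CLI, the same can be done in one pipeline with the GitHub CLI (the `tr` step strips the line wrapping that GNU `base64` adds by default):

```bash
# Encode the kubeconfig and store it as a repository secret in one step
base64 < ~/.kube/config | tr -d '\n' | gh secret set KUBECONFIG_SECRET
```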
This `KUBECONFIG_SECRET` is then used by the CD workflow to authenticate with your Kubernetes cluster and apply the required configurations.

Important: Be cautious with your `KUBECONFIG` data, as it provides administrative access to your Kubernetes cluster. Only store it in secure locations, and never expose it in logs or to unauthorized users.
In case you need to tear down the infrastructure and services that you have deployed, a script named `destroy.sh` is provided in the repository. This script will:
- Log in to Amazon ECR.
- Delete the specified Docker images from the ECR repositories.
- Delete the Kubernetes deployment and associated resources.
- Delete the Kubernetes namespace.
- Destroy the AWS resources created by Terraform.
1. Open the `destroy.sh` script.

2. Ensure that the variables at the top of the script match your AWS and Kubernetes settings:

   ```bash
   $1="ECR_REPOSITORY_NAME"
   $2="REGION"
   ```

3. Save the script and make it executable:

   ```bash
   chmod +x destroy.sh
   ```

4. Run the script:

   ```bash
   ./destroy.sh
   ```
This script will execute another script, `ecr-img-delete.sh`, which deletes all the images in the two ECR repositories to make sure both are empty, and then runs the `terraform` commands to destroy all resources related to your deployment.
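The ECR cleanup that such a script performs boils down to listing and batch-deleting the image IDs; a sketch, with hypothetical repository and region values:

```bash
# Hypothetical sketch of emptying one ECR repository before terraform destroy
repo="todo-app"          # assumed repository name
region="eu-central-1"    # assumed region

# Collect every image ID in the repository (assumes at least one image exists)
image_ids=$(aws ecr list-images --repository-name "$repo" \
  --region "$region" --query 'imageIds[*]' --output json)

# Delete them all in one batch
aws ecr batch-delete-image --repository-name "$repo" \
  --region "$region" --image-ids "$image_ids"
```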
Once `terraform destroy` starts, RDS begins creating a snapshot as a backup for the database, which causes the destroy process to fail at some point. The script therefore deletes the created snapshot and then runs `terraform destroy` again to make sure all resources are deleted:

```bash
aws rds delete-db-cluster-snapshot --db-cluster-snapshot-identifier $rds_snapshot_name --region $region
```
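Put together, the retry sequence might look like the following sketch (the snapshot lookup here is deliberately broad; in real use, scope it to your cluster):

```bash
# Hypothetical sketch of the destroy-retry loop around the snapshot cleanup
region="eu-central-1"  # assumed region

# First attempt; expected to fail once RDS starts creating a final snapshot
terraform destroy -auto-approve || true

# WARNING: this deletes every cluster snapshot in the region -- narrow the
# query to your own cluster before using anything like this for real.
for snap in $(aws rds describe-db-cluster-snapshots --region "$region" \
    --query 'DBClusterSnapshots[*].DBClusterSnapshotIdentifier' --output text); do
  aws rds delete-db-cluster-snapshot \
    --db-cluster-snapshot-identifier "$snap" --region "$region"
done

# Second attempt to remove the remaining resources
terraform destroy -auto-approve
```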
It is essential to verify that the script has completed successfully to ensure that all resources have been cleaned up and no unexpected costs are incurred.
Make sure to replace URLs, database configuration details, and any other specific instructions to fit your project. This README provides a basic guideline for setting up and running the application locally, with Docker, with Docker Compose, with minikube, and on EKS and RDS.