This is a simple application built for the DevOps technical test. The application can be accessed at https://stevencode.dev/api/
- Python 3.13
- Docker and Docker Compose
- Terraform (for infrastructure deployment)
- Pipenv (for virtual environment management)
- Git
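As a quick sanity check, the Python requirement can be verified from the interpreter itself (a minimal sketch; it only reports, it does not install anything):

```python
# Check that the running interpreter satisfies the Python 3.13 prerequisite.
import sys

REQUIRED = (3, 13)
ok = sys.version_info[:2] >= REQUIRED
print(f"Python {sys.version.split()[0]}: {'OK' if ok else 'upgrade required'}")
```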
Clone this repo:

```shell
git clone https://github.com/StevenMartinez94/devsu-test/
cd devsu-test
```

Install dependencies:

```shell
pip install -r requirements.txt
```

Alternatively, you can use Pipenv. Clone this repo:

```shell
git clone https://github.com/StevenMartinez94/devsu-test/
cd devsu-test
```

Install Pipenv if you haven't already:

```shell
pip install pipenv
```

Install dependencies and create the virtual environment:

```shell
pipenv install
```

Activate the virtual environment:

```shell
pipenv shell
```

You can also run everything in containers. Clone this repo:

```shell
git clone https://github.com/StevenMartinez94/devsu-test/
cd devsu-test
```

Build and run with Docker Compose:

```shell
docker-compose up --build
```

The database is generated as a file named db.sqlite3 in the project root the first time the project runs. Make sure the file has the proper access permissions so the application works correctly.
Migrate the database (not needed when using Docker):

```shell
python manage.py makemigrations
python manage.py migrate
```

To run the tests, use this command:

```shell
python manage.py test
```

To run the project locally, use this command:

```shell
python manage.py runserver
```

With Pipenv, prefix the commands with `pipenv run`:

```shell
# Run tests
pipenv run python manage.py test

# Run the development server
pipenv run python manage.py runserver
```

When using Docker Compose, the application is automatically available at http://localhost:8000. Open http://localhost:8000/api/ in your browser to see the result.
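Once the server is up, a quick way to verify that the API is reachable is a small Python smoke test (a sketch; the URL assumes the default port mapping described above):

```python
# Smoke test: confirm the API answers at the expected URL.
# Assumes the development server or Docker Compose stack is running locally.
import urllib.request

API_URL = "http://localhost:8000/api/"

def is_up(url: str = API_URL, timeout: float = 3.0) -> bool:
    """Return True when the endpoint answers with HTTP 200, False otherwise."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False
```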
Why on-premises? The main reason for choosing on-premises deployment here is to demonstrate the ability to set up a Kubernetes cluster on a bare-metal server, which is a common scenario in DevOps practices. Also, on-premises deployment offers greater control over the infrastructure and data, which can be crucial for certain applications and compliance requirements.
Why Contabo? Contabo provides affordable cloud infrastructure with reliable performance, making it suitable for deploying small applications. For production, I would rather consider using a more robust provider like AWS or GCP.
What else would I need to deploy a project like this? You would need a domain name and to apply some Kubernetes manifests, such as:
```shell
# Install cert-manager for automated TLS certificates
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/latest/download/cert-manager.yaml

# Install MetalLB for bare-metal load balancing
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.15.2/config/manifests/metallb-native.yaml

# Install ingress-nginx for HTTP ingress routing
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/baremetal/deploy.yaml
```

This project includes Docker support for containerized deployment.
- Dockerfile: Multi-stage build with Python 3.13-slim base image
- docker-compose.yml: Local development orchestration
- Security: Runs as non-root user (apiuser:1000)
- Health Checks: Built-in health monitoring at the /api endpoint
- Logging: JSON file driver with rotation (10MB max, 3 files)
```shell
# Build the image
docker build -t restful-api .

# Run with Docker Compose
docker-compose up -d

# View logs
docker-compose logs -f restful-api

# Stop services
docker-compose down

# Rebuild and restart
docker-compose up --build --force-recreate
```

The project includes Terraform configuration for deploying Kubernetes infrastructure on the Contabo cloud provider.
- 3 Contabo instances: 1 master + 2 worker nodes
- Kubernetes cluster: Ready for application deployment
```shell
cd terraform

# Initialize Terraform
terraform init

# Plan the deployment
terraform plan

# Apply the infrastructure
terraform apply

# View outputs (IPs, instance IDs)
terraform output

# Destroy infrastructure
terraform destroy
```

Create a terraform.tfvars file with:
```hcl
oauth2_client_id     = "your-contabo-client-id"
oauth2_client_secret = "your-contabo-client-secret"
oauth2_user          = "your-contabo-user"
oauth2_pass          = "your-contabo-password"
product_id           = "your-product-id"
region               = "your-region"
image_id             = "your-image-id"
```

The k8s/manifests/ directory contains Kubernetes deployment files:
- namespace.yaml: Application namespace
- deployment.yaml: Application deployment configuration
- service.yaml: Service exposure
- ingress.yaml: Ingress rules and certificate setup
- pv.yaml & pvc.yaml: Persistent volume configuration
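These manifests must be applied in dependency order: the namespace before any namespaced resource, and the volume before the deployment that mounts it. A small Python helper sketching that order — the paths follow the k8s/manifests/ layout above, and the exact pv/pvc-before-deployment ordering is an assumption:

```python
# Apply the k8s/manifests/ files in dependency order via kubectl.
# With dry_run=True (the default) the commands are only built, not executed.
import subprocess

MANIFESTS = [
    "k8s/manifests/namespace.yaml",
    "k8s/manifests/pv.yaml",
    "k8s/manifests/pvc.yaml",
    "k8s/manifests/deployment.yaml",
    "k8s/manifests/service.yaml",
    "k8s/manifests/ingress.yaml",
]

def apply_all(dry_run: bool = True) -> list:
    """Build (and optionally run) one `kubectl apply -f` command per manifest."""
    commands = [["kubectl", "apply", "-f", path] for path in MANIFESTS]
    if not dry_run:
        for cmd in commands:
            subprocess.run(cmd, check=True)
    return commands
```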
NOTE: You may need to create a Kubernetes secret for the Django secret key inside the demo-app namespace. You can do it with the following command:

```shell
kubectl create secret generic demo-app-secrets \
  --namespace=demo-app \
  --from-literal=django-secret-key='your-django-secret-key'
```

The project implements a comprehensive CI/CD pipeline using GitHub Actions, with six automated workflows that ensure code quality, security, and reliable deployments.
The CI/CD pipeline follows a sequential execution pattern:
Lint → Build & Test → Docker Build → Vulnerability Scan & Code Coverage → Deploy
The project uses six GitHub Actions workflows for automated quality, security, and deployment:
- Lint: Checks Python, YAML, and Terraform formatting.
- Build & Test: Runs Django tests after linting, using secrets for configuration.
- Docker Build & Push: Builds and pushes Docker images to GHCR after tests pass.
- Vulnerability Scan: Scans built images for security issues using Trivy.
- Code Coverage: Measures and reports test coverage.
- Deploy: Deploys to Kubernetes using updated manifests and secrets.
To enable full CI/CD functionality, configure these secrets in your repository:
```shell
# Django Application
DJANGO_SECRET_KEY=your-secure-django-secret-key

# Kubernetes Deployment
KUBECONFIG_B64=base64-encoded-kubeconfig-content
```

Django Secret Key:

```shell
python -c "from django.core.management.utils import get_random_secret_key; print(get_random_secret_key())"
```

Kubeconfig (Base64):

```shell
# Encode your kubeconfig file
cat ~/.kube/config | base64 -w 0
```

Security Features:
- Container image scanning for OS and application vulnerabilities
- SARIF security reporting format
Quality Assurance:
- Multi-language linting (Python, YAML, Terraform)
- Automated testing
- Code coverage reporting
Deployment Features:
- Tag-based versioning
- Automatic image tagging
- Rolling deployments to Kubernetes
- Multi-environment support
Container Registry:
- GitHub Container Registry (GHCR) integration
- Automatic image cleanup and tagging
- Multi-architecture support ready
Developers can run similar checks locally:
```shell
# Run linting checks
black --check .
yamllint .
terraform fmt -check -recursive

# Run tests
python manage.py test api

# Build Docker image locally
docker build -t restful-api:local .
```

Create a .env file for local development:

```shell
DJANGO_SECRET_KEY=your-django-secret-key
DATABASE_NAME=db.sqlite3
```

Logs are configured with rotation to prevent disk space issues:
```yaml
logging:
  driver: json-file
  options:
    max-size: 10m
    max-file: "3"
```

The application includes health monitoring:

```yaml
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:8000/api"]
  interval: 30s
  timeout: 10s
  retries: 5
```

The API provides the following services:
To create a user, call the /api/users/ endpoint with the following parameters:

Method: POST

```json
{
    "dni": "dni",
    "name": "name"
}
```

If the request is successful, the service will return HTTP status 200 and a message with the following structure:

```json
{
    "id": 1,
    "dni": "dni",
    "name": "name"
}
```

If the request is unsuccessful, we will receive status 400 and the following message:

```json
{
    "detail": "error"
}
```

To get all users, call the /api/users/ endpoint with the following parameters:
Method: GET

If the request is successful, the service will return HTTP status 200 and a message with the following structure:

```json
[
    {
        "id": 1,
        "dni": "dni",
        "name": "name"
    }
]
```

To get a single user, call the /api/users/{id} endpoint with the following parameters:
Method: GET

If the request is successful, the service will return HTTP status 200 and a message with the following structure:

```json
{
    "id": 1,
    "dni": "dni",
    "name": "name"
}
```

If the user id does not exist, we will receive status 404 and the following message:

```json
{
    "detail": "Not found."
}
```

Copyright © 2023 Devsu. All rights reserved.
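As a usage illustration, the endpoints documented above can be consumed from Python with only the standard library (a sketch: the base URL is an assumption; adjust it to your deployment):

```python
# Minimal client for the documented /api/users/ endpoints (standard library only).
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # assumed local deployment

def build_request(path, payload=None):
    """Build a Request: POST with a JSON body when a payload is given, GET otherwise."""
    data = json.dumps(payload).encode() if payload is not None else None
    return urllib.request.Request(
        BASE_URL + path,
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST" if payload is not None else "GET",
    )

def create_user(dni, name):
    """POST /api/users/ with the documented body; return the created user dict."""
    with urllib.request.urlopen(build_request("/api/users/", {"dni": dni, "name": name})) as resp:
        return json.load(resp)

def get_users():
    """GET /api/users/ and return the list of users."""
    with urllib.request.urlopen(build_request("/api/users/")) as resp:
        return json.load(resp)
```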
