Project Focus: This project demonstrates infrastructure as code using Terraform and AWS cloud services, specifically focused on deploying a microservices architecture to AWS EKS with a complete GitLab CI/CD pipeline. The primary goal is to showcase cloud infrastructure design, automated deployment, and continuous integration/delivery practices.
A cloud-native microservices architecture deployed on AWS EKS (Elastic Kubernetes Service), designed for high availability, scalability, and security, with fully automated CI/CD pipelines using self-hosted GitLab.
Diagram of AWS resources for the production environment.
- Quick Start Guide
- Project Overview
- Architecture
- Setup & Installation
- CI/CD Pipelines
- Usage & Testing
- Monitoring & Observability
- Maintenance & Operations
- Project Structure
- Future Enhancements
- License
For those who want to get started immediately:
- Prerequisites: Ensure you have Docker, AWS CLI, Terraform v1.11.4+, Ansible, kubectl, Python 3.13, and Git installed, and that your AWS user has the required IAM permissions (see AWS Permissions below).
- Start GitLab:

  ```shell
  cd gitlab
  cp .env.example .env              # edit the .env file
  docker-compose up -d
  cat config/initial_root_password  # get the root password
  ```
- Set up Terraform state management in S3 and DynamoDB:

  ```shell
  cp terraform.tfvars.example terraform.tfvars  # edit terraform.tfvars
  cd terraform/bootstrap
  terraform init && terraform apply
  ```
- Configure & run Ansible:

  ```shell
  cd ansible
  cp group_vars/all.yml.example group_vars/all.yml
  cp group_vars/vault.yml.example group_vars/vault.yml  # edit both files
  ansible-vault encrypt group_vars/vault.yml
  python3.13 -m venv ~/.ansible-venv
  source ~/.ansible-venv/bin/activate
  pip install ansible python-gitlab
  ansible-playbook -i inventory/localhost gitlab_setup.yml --ask-vault-pass
  ```
- Run Pipelines: Allow the infrastructure pipeline to complete before running the service pipelines.
This project implements a cloud-native movie catalog service with three microservices:
- API Gateway
  - Entry point for all client requests
  - Routes requests to appropriate backend services
  - Swagger/OpenAPI documentation at /api-docs
  - Built with Node.js/Express
- Inventory Service
  - Manages the movie catalog with CRUD operations
  - PostgreSQL database for persistent storage
  - RESTful API endpoints for interacting with the catalog
- Billing Service
  - Processes orders through a message queue system
  - PostgreSQL database for order history
  - Asynchronous processing using RabbitMQ
- Multi-Environment Design
  - Staging: Complete EKS cluster in its own VPC
  - Production: Separate EKS cluster in its own VPC
  - Each environment is completely isolated, with identical architecture
- AWS Services Used
  - VPC: Custom VPC spanning multiple availability zones for each environment
  - EKS: Managed Kubernetes with nodes in private subnets
  - Load Balancing: Application Load Balancer with HTTPS support
  - CloudWatch: Comprehensive monitoring dashboards
  - ACM: Certificate management for HTTPS support
  - S3/DynamoDB: Terraform state management
- High Availability Design
  - Multi-AZ Setup: Resources distributed across eu-north-1a and eu-north-1b
  - Private/Public Subnet Separation: Enhanced security with proper gateway configuration
  - Autoscaling: Horizontal Pod Autoscaling (HPA) based on CPU utilization
- Stateless Services (API Gateway, Inventory App)
  - Deployed as Kubernetes Deployments
  - Configured with a Horizontal Pod Autoscaler (HPA)
  - Minimum of 1 replica, scaling up to 3 replicas based on 60% CPU utilization
  - Topology spread constraints across availability zones
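The scaling policy above can be sketched as a manifest. This is an illustrative example, not a file from this repository: the resource name `inventory-app` is an assumption, while the replica counts and CPU target come from the description above.

```yaml
# Hypothetical HPA manifest matching the scaling policy described above.
# Only minReplicas/maxReplicas and the 60% CPU target are taken from this README.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: inventory-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: inventory-app
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60
```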
- Stateful Components (Billing App, Billing Queue, Databases)
  - Deployed as StatefulSets to preserve state and identity
  - Persistent volume claims for data retention
  - Single replica with backup strategies
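As a rough sketch of the StatefulSet pattern described above, assuming hypothetical names, image, and volume size (only the single replica and persistent volume claim usage come from this README):

```yaml
# Illustrative StatefulSet excerpt for a billing database.
# Names, image, and storage size are assumptions, not values from this repo.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: billing-db
spec:
  serviceName: billing-db
  replicas: 1                      # single replica, relying on backups
  selector:
    matchLabels:
      app: billing-db
  template:
    metadata:
      labels:
        app: billing-db
    spec:
      containers:
        - name: postgres
          image: postgres:16
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:            # PVC per pod, retained across restarts
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 5Gi
```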
- High Availability: Multi-AZ deployment with automatic failover
- Scalability: EKS auto-scaling with configurable node groups
- Security: Private subnets for application pods, public-only for ingress
- Disaster Recovery: AZ2 configured for disaster recovery and scaling
- HTTPS Support: Integrated with AWS Certificate Manager
- Automated Deployment: Complete CI/CD pipeline for code and infrastructure changes
- Self-hosted GitLab: Full control over the CI/CD environment with Docker-based setup
- Docker and Docker Compose
- AWS CLI
- Terraform v1.11.4+
- Ansible
- kubectl
- Python 3.13
- Git
Before beginning the deployment, ensure your AWS user has the following IAM permissions:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SetupIAM",
      "Effect": "Allow",
      "Action": [
        "iam:GetUser",
        "iam:CreatePolicy",
        "iam:GetPolicy",
        "iam:GetPolicyVersion",
        "iam:AttachUserPolicy",
        "iam:ListAttachedUserPolicies",
        "iam:ListPolicyVersions",
        "iam:DetachUserPolicy",
        "iam:DeletePolicy",
        "iam:DeletePolicyVersion",
        "iam:CreatePolicyVersion",
        "iam:UpdateAssumeRolePolicy",
        "iam:ListAttachedGroupPolicies",
        "iam:CreateGroup",
        "iam:GetGroup",
        "iam:DeleteGroup",
        "iam:AddUserToGroup",
        "iam:AttachGroupPolicy",
        "iam:ListGroupsForUser",
        "iam:DetachGroupPolicy",
        "iam:RemoveUserFromGroup",
        "iam:UpdateGroup",
        "iam:ListEntitiesForPolicy",
        "iam:ListPolicies"
      ],
      "Resource": "*"
    }
  ]
}
```
You can create this policy in the AWS Management Console, or see the bootstrap/initial-user-policy.json file for a ready-to-use policy document.
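As an alternative to the console, the same policy can be created and attached with the AWS CLI. A minimal sketch: the policy name `setup-iam` and the `<account-id>`/`<your-username>` placeholders are illustrative, not values defined by this project.

```shell
# Create the policy from the bundled document and attach it to your user.
# "setup-iam", <account-id>, and <your-username> are placeholders.
aws iam create-policy \
  --policy-name setup-iam \
  --policy-document file://bootstrap/initial-user-policy.json
aws iam attach-user-policy \
  --user-name <your-username> \
  --policy-arn arn:aws:iam::<account-id>:policy/setup-iam
```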
First, set up the self-hosted GitLab instance:
```shell
cd gitlab
cp .env.example .env   # edit .env to configure your environment variables
docker-compose up -d
```
Wait for GitLab to start (this may take a few minutes), then access the GitLab UI at http://192.168.56.10 (or your configured URL). Avoid using http://localhost, as it can cause issues with container communication.
The initial root password can be obtained from the config folder:

```shell
cat config/initial_root_password
```
Configure your GitLab API token and other variables:
```shell
cd ansible
cp group_vars/all.yml.example group_vars/all.yml
cp group_vars/vault.yml.example group_vars/vault.yml
# Edit the .yml files to add your GitLab token and other variables
```
Be sure to encrypt vault.yml:

```shell
ansible-vault encrypt group_vars/vault.yml
```
While in the ansible/ folder, create and activate the virtual environment:

```shell
python3.13 -m venv ~/.ansible-venv
source ~/.ansible-venv/bin/activate
```
Install required dependencies:
```shell
pip install ansible python-gitlab
```
Ensure your inventory/localhost file contains:

```ini
[local]
localhost ansible_connection=local
```
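Before running any playbook, you can optionally verify the inventory and local connection; this sanity check is a suggestion, not a project requirement:

```shell
# Ping the local host defined in the inventory to confirm Ansible can reach it
ansible -i inventory/localhost local -m ping
```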
Configure and fill the tfstate variables for the backend:

```shell
cp terraform.tfvars.example terraform.tfvars
```
Initialize Terraform state backend:
```shell
cd terraform/bootstrap
terraform init
terraform apply
```
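Once the bootstrap apply completes, other Terraform stacks can be initialized against the new S3/DynamoDB backend. A minimal sketch: the bucket, table, and key values below are placeholders that must match your terraform.tfvars, and the region is taken from this project's use of eu-north-1.

```shell
# Re-initialize a stack against the backend created by bootstrap.
# Bucket, table, and key values are placeholders.
terraform init \
  -backend-config="bucket=<your-state-bucket>" \
  -backend-config="dynamodb_table=<your-lock-table>" \
  -backend-config="region=eu-north-1" \
  -backend-config="key=staging/terraform.tfstate"
```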
```shell
# Ensure the virtual environment is activated:
source ~/.ansible-venv/bin/activate
# Run the playbook:
ansible-playbook -i inventory/localhost gitlab_setup.yml --ask-vault-pass
```
This will:
- Create repositories for each service
- Configure CI/CD variables
- Set up webhooks and access permissions
- Initialize repositories with code and CI/CD configuration
CI/CD will automatically deploy changes when you push to the repositories.
The project uses two distinct pipeline types:
- Service Pipelines: For each microservice (API Gateway, Inventory, Billing)
- Infrastructure Pipeline: For managing the AWS infrastructure via Terraform
Go to Projects > Your Project > Build > Pipelines, or navigate directly to `<gitlab-host>/dashboard/projects/personal`.
- Click on a specific pipeline icon to view its jobs and stages
- Each job displays real-time logs and status updates
- Manual Approval Steps:
  - Identify jobs marked as manual and click to run them
  - Example: the approval-prod and apply-prod jobs require manual approval after the staging deployment completes
Each service pipeline includes the following stages:
- Build: Compiles code, installs dependencies via npm, verifies installation
- Test: Executes the test suite for the service
- Scan: Performs code quality and security analysis
- Containerize: Builds Docker images, tags them, and pushes to Docker Hub
- Deploy to Staging: Configures kubectl, creates Kubernetes secrets, deploys to staging
- Approval: Manual approval gate before proceeding to production
- Deploy to Production: Deploys to production EKS cluster using prod-specific variables
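The stage list above could be expressed in a .gitlab-ci.yml along these lines. This is an illustrative skeleton, not the project's actual pipeline file: the stage names mirror this README, but the job names, image, and scripts are assumptions.

```yaml
# Illustrative .gitlab-ci.yml skeleton for a service pipeline.
# Stage names follow this README; images and scripts are assumptions.
stages: [build, test, scan, containerize, deploy-staging, approval, deploy-prod]

build:
  stage: build
  image: node:20
  script:
    - npm ci          # install dependencies and verify the installation

approval:
  stage: approval
  when: manual        # manual gate before production, as described above
  script:
    - echo "Approved for production"

deploy-prod:
  stage: deploy-prod
  needs: ["approval"]
  script:
    - kubectl apply -f k8s/   # deploy using prod-specific variables
```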
The infrastructure pipeline manages all AWS resources:
- Init: Initializes Terraform with proper backend configuration, generates tfvars files
- Validate: Syntax validation, format checking, security best practices verification
- Plan: Creates execution plan for staging environment
- Apply Staging: Applies the plan to staging, configures kubectl, creates Kubernetes resources
- Approval: Manual approval gate for production changes
- Apply Production: Plans and applies changes to production environment
When running, API documentation is available at the /api-docs endpoint of the API Gateway service.
Import the collections and environment from the postman/ directory to test the API:
- Import code-keeper.postman_collection.json
- Import code-keeper.postman_environment.json
- Update the environment variables with your deployment details
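If you prefer the command line over Postman, equivalent requests can be made with curl. The `/api/movies` path and the `<ingress-host>` placeholder are assumptions based on a typical movie CRUD API, so adjust them to match your deployment and collection:

```shell
# Hypothetical curl equivalents of the Postman CRUD requests.
# Replace <ingress-host> and /api/movies with your actual values.
# -k skips certificate verification for the self-signed ACM certificate.
curl -k https://<ingress-host>/api/movies
curl -k -X POST https://<ingress-host>/api/movies \
  -H "Content-Type: application/json" \
  -d '{"title": "Inception", "description": "A heist inside dreams"}'
curl -k -X DELETE https://<ingress-host>/api/movies/1
```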
The following image shows the running Kubernetes services and pods in staging and production environments:
Successful API test results from Postman showing the Movie CRUD operations working against the AWS ingress endpoint:
The project includes a comprehensive CloudWatch dashboard that provides visibility into cluster performance:
Key metrics available:
- Overall Cluster Metrics: Pod and node-level CPU and memory utilization
- Namespace Monitoring: Metrics for default and kube-system namespaces
- Application Performance: CPU and memory tracking for each microservice
- Infrastructure Monitoring: Metrics for databases and message queue
- Resource Optimization: Resource utilization against defined limits
- Container Insights: Deep visibility into container performance
The dashboard provides both high-level overview panels and detailed component-specific metrics, enabling quick identification of performance bottlenecks or potential issues.
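The Container Insights metrics behind the dashboard can also be inspected from the CLI. `ContainerInsights` is the standard CloudWatch namespace for EKS Container Insights, but verify it matches your setup:

```shell
# List Container Insights metrics collected from the EKS clusters
aws cloudwatch list-metrics --namespace ContainerInsights --region eu-north-1
# List the dashboards created by this project
aws cloudwatch list-dashboards --region eu-north-1
```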
To clean up all resources when you're done:
- Remove application resources: done via the infrastructure pipeline. Run cleanup-prod and then cleanup-staging. The pipeline cleans up all Kubernetes resources before running terraform destroy.
- Destroy the S3 Terraform state backend:

  ```shell
  cd terraform/bootstrap
  terraform destroy
  ```

- Shut down GitLab:

  ```shell
  cd gitlab
  docker-compose down
  ```
```
code-keeper/
├── ansible/             # Ansible playbooks for GitLab setup
├── gitlab/              # GitLab Docker configuration
├── images/              # Architecture diagrams and screenshots
├── postman/             # API test collections
├── src/                 # Application source code
│   ├── api-gateway/     # API Gateway service
│   ├── billing-app/     # Billing service
│   └── inventory-app/   # Inventory service
└── terraform/           # Infrastructure as code
```
- Migrate to Amazon RDS: Replace StatefulSet PostgreSQL databases with Amazon RDS for improved reliability, automatic backups, and Multi-AZ deployments
- Custom Domain: Register a custom domain name to replace the self-signed certificate with a properly validated AWS ACM certificate
- Amazon CloudFront: Add CloudFront content delivery network (CDN) for faster content delivery
- AWS WAF Integration: Add AWS Web Application Firewall for additional protection against common exploits
- Automated Rollbacks: Add automated rollback capability if deployments fail
- Extended Test Coverage: Add performance and security testing to the CI/CD pipeline
See the LICENSE file for licensing details.