Version: v1.0.0 | License: MIT | Status: Lab & PoC Ready
A complete automation framework for deploying a highly available PostgreSQL database on Azure Kubernetes Service with Premium v2 storage, CloudNativePG operator, and PgBouncer connection pooling.
⚠️ IMPORTANT: Lab and Proof-of-Concept Use Only
This code is provided strictly for lab environments and proof-of-concept purposes only. It is not intended for production use. Additional hardening, security reviews, compliance validation, and operational procedures are required before considering any production deployment.
```mermaid
graph TB
subgraph "Azure Subscription"
subgraph "Virtual Network (10.0.0.0/8)"
subgraph "AKS Cluster (1.32)"
subgraph "System Node Pool (2x D2s_v5)"
CNPG["CNPG Operator<br/>(cnpg-system)"]
INF["Prometheus<br/>Monitoring"]
end
subgraph "Connection Pooling Layer"
PGB1["PgBouncer Pod 1<br/>Transaction Mode<br/>10K Connections"]
PGB2["PgBouncer Pod 2<br/>Transaction Mode<br/>10K Connections"]
PGB3["PgBouncer Pod 3<br/>Transaction Mode<br/>10K Connections"]
end
subgraph "PostgreSQL Node Pool (3x E8as_v6)"
PG1["PostgreSQL Primary<br/>Instance 1<br/>200GB Data + WAL<br/>40K IOPS"]
PG2["PostgreSQL Sync Replica<br/>Instance 2 (Quorum)<br/>200GB Data + WAL<br/>40K IOPS"]
PG3["PostgreSQL Async Replica<br/>Instance 3<br/>200GB Data + WAL<br/>40K IOPS"]
end
subgraph "Kubernetes Services"
SVC_POOL_RW["Service: pg-primary-pooler-rw<br/>(PgBouncer Read-Write)<br/>Port 5432"]
SVC_POOL_RO["Service: pg-primary-pooler-ro<br/>(PgBouncer Read-Only)<br/>Port 5432"]
SVC_RW["Service: pg-primary-rw<br/>(Direct Read-Write)<br/>Port 5432"]
SVC_RO["Service: pg-primary-ro<br/>(Direct Read-Only)<br/>Port 5432"]
end
end
SVC_POOL_RW --> PGB1 & PGB2 & PGB3
SVC_POOL_RO --> PGB1 & PGB2 & PGB3
PGB1 & PGB2 & PGB3 -.->|Connection Pool| PG1
PGB1 & PGB2 & PGB3 -.->|Connection Pool| PG2 & PG3
PG1 ===|Sync Replication<br/>RPO=0| PG2
PG1 ---|Async Replication| PG3
SVC_RW --> PG1
SVC_RO --> PG2 & PG3
end
subgraph "Storage & Backup"
SA["Azure Storage Account<br/>(ZRS)<br/>Blob Backups"]
LA["Log Analytics<br/>Workspace"]
end
subgraph "Monitoring"
GRAF["Azure Managed Grafana<br/>Instance"]
AMW["Azure Monitor<br/>Workspace"]
end
subgraph "Network Security"
NSG["Network Security Group<br/>- Kubernetes API: 443<br/>- PostgreSQL: 5432"]
MI["Managed Identity<br/>(Workload Identity)"]
end
PG1 & PG2 & PG3 -->|WAL Archive + Backups| SA
CNPG & PG1 & PG2 & PG3 -->|Metrics| AMW
AMW --> GRAF
MI -->|Auth to Storage| SA
NSG -.->|Security Rules| PG1 & PG2 & PG3
end
style PG1 fill:#336791,stroke:#2d5a7b,color:#fff
style PG2 fill:#336791,stroke:#2d5a7b,color:#fff
style PG3 fill:#336791,stroke:#2d5a7b,color:#fff
style PGB1 fill:#47a8bd,stroke:#358a9c,color:#fff
style PGB2 fill:#47a8bd,stroke:#358a9c,color:#fff
style PGB3 fill:#47a8bd,stroke:#358a9c,color:#fff
style SA fill:#0078d4,stroke:#0062a3,color:#fff
style GRAF fill:#ff9830,stroke:#d67f1a,color:#fff
style AMW fill:#0078d4,stroke:#0062a3,color:#fff
style MI fill:#7fba00,stroke:#6d9b00,color:#fff
style NSG fill:#ff6b6b,stroke:#e63946,color:#fff
```
- Full Automation: Pure Azure CLI scripts following Microsoft reference implementation
- Separate Node Pools: 2 system nodes (D2s_v5) + 3 user nodes (E8as_v6) for workload isolation
- Zone Redundancy: Deployment across 3 Azure Availability Zones
- Premium Storage: Premium SSD v2 with 40K IOPS & 1,200 MB/s per disk (200 GiB)
- DevContainer Ready: Pre-configured environment with all tools installed
- 3-Node Cluster: 1 primary + 1 quorum sync replica + 1 async replica
- Automatic Failover: <10 second RTO with zero data loss (RPO = 0)
- Data Durability: Synchronous replication with remote_apply guarantee
- Connection Pooling: 3 PgBouncer instances handling 10,000+ concurrent connections
- Health Monitoring: Automated health checks with self-healing capabilities
- Target Throughput: Optimized for 8,000-10,000 TPS
- Dynamic Resources: PostgreSQL parameters auto-calculate from memory allocation
- Efficient Pooling: Transaction-mode pooling for optimal connection management
- Load Balancing: Automatic read distribution across replicas
- Workload Identity: Federated credentials (no secrets in pods)
- Authentication: SCRAM-SHA-256 password encryption
- Network Security: NSGs, private networking, NAT Gateway
- Encryption: At-rest and in-transit encryption
- RBAC: Kubernetes role-based access control
- Grafana Dashboards: Pre-built dashboard with 9 monitoring panels
- Prometheus Metrics: Real-time cluster health and performance metrics
- Azure Monitor: Centralized log aggregation and alerting
- CloudNativePG: 1.27.1 operator for automated lifecycle management
- Automated Backups: WAL archiving + base backups to Azure Blob Storage
- 7-Day Retention: Configurable backup retention policies
- Point-in-Time Recovery: PITR capability via WAL archives
- Geo-Redundancy: Optional GRS for disaster recovery
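As a sketch of how the PITR capability can be exercised: CloudNativePG can bootstrap a fresh cluster from the archived WAL and base backups at a chosen timestamp. The manifest below uses the in-tree `barmanObjectStore` form for brevity (this repo installs the Barman Cloud Plugin, whose wiring differs slightly); the cluster name, storage account, and target time are placeholders, not values from the deployment scripts.

```shell
# Hypothetical PITR sketch: bootstrap a new cluster from the Blob Storage
# backups at a chosen point in time inside the 7-day retention window.
# All names below are assumptions - adapt them to the names in your .env.
cat > pitr-restore.yaml <<'EOF'
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-primary-restored
  namespace: cnpg-database
spec:
  instances: 3
  bootstrap:
    recovery:
      source: pg-primary                      # must match an externalClusters entry
      recoveryTarget:
        targetTime: "2025-10-01 12:00:00+00"  # any point inside the retention window
  externalClusters:
    - name: pg-primary
      barmanObjectStore:
        destinationPath: "https://<account>.blob.core.windows.net/backups"
        azureCredentials:
          inheritFromAzureAD: true            # Workload Identity, no secrets
EOF
# Review the manifest, then apply it against your cluster:
# kubectl apply -f pitr-restore.yaml
echo "Wrote pitr-restore.yaml ($(wc -l < pitr-restore.yaml) lines)"
```

The restored cluster comes up alongside the original, so you can verify the recovered data before switching applications over.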
All tools pre-installed in isolated container with auto-generated environment:
```shell
# Requirements: Docker Desktop + VS Code Remote - Containers extension
# 1. Open project in VS Code
# 2. Ctrl+Shift+P -> "Dev Containers: Reopen in Container"
# 3. Wait for build (2-5 min first time)
# 4. .env file auto-generated with unique resource names
# 5. Tools ready: az, kubectl, helm, jq, bc, psql, netcat, kubectl-cnpg (v1.27.1)
```

Key Features:
- Auto-generates `.env` with unique suffix and resource names
- CNPG kubectl plugin v1.27.1 pre-installed
- PostgreSQL client (psql) for testing
- Network tools (netcat) for connectivity testing
- Calculator (bc) for pgbench metrics
See .devcontainer/README.md for detailed setup.
Prerequisites:
- Azure CLI (v2.56+), kubectl (v1.21+), Helm (v3.0+), jq, OpenSSL
- Azure subscription with Owner or User Access Administrator role
- Region with Premium v2 disk support
Option A: DevContainer (Recommended)
```shell
# .env is auto-generated when container starts
# Contains unique suffix and all resource names
source .env
# Verify configuration
echo "Resource Group: $RESOURCE_GROUP_NAME"
echo "AKS Cluster: $AKS_PRIMARY_CLUSTER_NAME"
# Optional: Regenerate with new suffix
./scripts/regenerate-env.sh
```

Option B: Manual Setup
```shell
# Clone repository
git clone <repo-url>
cd azure-postgresql-ha-aks-workshop
# Review and customize environment variables
code config/environment-variables.sh
```

Using DevContainer:
```shell
# Load auto-generated environment variables
source .env
# Deploy all components (8 automated steps)
./scripts/deploy-all.sh
```

Using Manual Setup:
```shell
# Load environment variables into current shell session
# This makes all configuration values available to deployment scripts
source config/environment-variables.sh
# Deploy all components (8 automated steps)
./scripts/deploy-all.sh
```

What does this do? The `source` command loads all configuration variables (such as resource names, regions, and VM sizes) into your current terminal session, so the deployment scripts can access these values without hardcoding them. In the DevContainer, `.env` is auto-generated with unique resource names; otherwise, use `config/environment-variables.sh`.
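To see the difference concretely, this standalone snippet (using throwaway demo values, not the real `.env`) contrasts executing a config file with sourcing it:

```shell
# Sourcing runs the file in the *current* shell, so exported variables
# remain visible afterwards; executing it runs a child shell and loses them.
cat > demo-env.sh <<'EOF'
export RESOURCE_GROUP_NAME="rg-demo"
export AKS_PRIMARY_CLUSTER_NAME="aks-demo"
EOF

bash demo-env.sh                                     # runs in a child shell...
echo "after bash:   '${RESOURCE_GROUP_NAME:-unset}'" # ...so nothing persists

source ./demo-env.sh                                 # runs in this shell
echo "after source: '${RESOURCE_GROUP_NAME}'"        # variable now available
```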
```shell
# Get cluster credentials
az aks get-credentials --resource-group <rg-name> --name <cluster-name>
# Check status
kubectl cnpg status pg-primary -n cnpg-database
# View pods
kubectl get pods -n cnpg-database -l cnpg.io/cluster=pg-primary
# Run comprehensive cluster validation (in-cluster Kubernetes Job)
./scripts/07a-run-cluster-validation.sh
```

What gets validated:
- ✅ Primary and replica connectivity (100% pass rate)
- ✅ PgBouncer connection pooling (3 instances)
- ✅ Data write operations and replication consistency
- ✅ Read-only service routing to replicas
- ✅ Replication health and accessibility
- ✅ Multi-connection concurrency testing
- ⚡ Completes in ~7 seconds (in-cluster execution)
```shell
# Option 1: Connect via PgBouncer (Recommended for Applications)
kubectl port-forward svc/pg-primary-pooler-rw 5432:5432 -n cnpg-database &
psql -h localhost -U app -d appdb
# Option 2: Direct connection to PostgreSQL
kubectl port-forward svc/pg-primary-rw 5432:5432 -n cnpg-database &
psql -h localhost -U app -d appdb
```

Why use PgBouncer?
- Handles 10,000+ concurrent connections efficiently
- Reduces PostgreSQL connection overhead
- Transaction-level pooling for optimal performance
- Automatic load distribution across replicas
| Document | Description |
|---|---|
| SETUP_COMPLETE.md | **START HERE** - Complete setup guide |
| QUICK_REFERENCE.md | Command cheat sheet |
| COST_ESTIMATION.md | Hourly/monthly cost breakdown (~$2,873/month) |
| GRAFANA_DASHBOARD_GUIDE.md | Dashboard usage and metrics |
| FAILOVER_TESTING.md | High availability testing |
| CNPG_BEST_PRACTICES.md | CloudNativePG 1.27 production best practices |
.env - Auto-generated (DevContainer only, gitignored)
- Unique suffix for resource names
- All Azure resource names pre-configured
- Generated at devcontainer startup
config/
└── environment-variables.sh - Bash environment configuration template
- Resource names with random suffix
- AKS settings (version, VM sizes)
- Storage configuration (IOPS, throughput)
- PostgreSQL parameters
- Auto-detect public IP for firewall
scripts/
├── deploy-all.sh - Master orchestration script (8 steps with logging)
├── regenerate-env.sh - ✅ Regenerate .env with new suffix (DevContainer)
├── setup-prerequisites.sh - ✅ Install required tools (non-DevContainer)
├── 02-create-infrastructure.sh - Creates Azure resources (RG, AKS, Storage, Identity, Bastion, NAT Gateway, Container Insights)
├── 03-configure-workload-identity.sh - Sets up federated credentials
├── 04-deploy-cnpg-operator.sh - Installs CloudNativePG operator v1.27.1 via Helm
├── 04a-install-barman-cloud-plugin.sh - Installs Barman Cloud Plugin v0.8.0 for backup/restore
├── 05-deploy-postgresql-cluster.sh - Deploys PostgreSQL cluster + PgBouncer pooler + PodMonitor
├── 06-configure-monitoring.sh - Configures Azure Managed Grafana
├── 06a-configure-azure-monitor-prometheus.sh - Configures Azure Monitor Managed Prometheus
├── 06b-import-grafana-dashboard.sh - ✅ Import Grafana dashboard automatically
├── 07-display-connection-info.sh - Displays connection endpoints and credentials
└── 07a-run-cluster-validation.sh - ✅ In-cluster validation (100% pass rate, 7s execution)
kubernetes/
└── postgresql-cluster.yaml - Reference manifest (NOT used in deployment)
- See scripts/05-deploy-postgresql-cluster.sh for actual deployment
- Configuration values loaded from environment variables
```
azure-postgresql-ha-aks-workshop/
├── README.md                          # Main project documentation
├── 00_START_HERE.md                   # Quick start guide
├── CONTRIBUTING.md                    # Contribution guidelines
├── LICENSE                            # MIT License
│
├── config/                            # Configuration files
│   └── environment-variables.sh       # Bash environment config
│
├── scripts/                           # Deployment automation
│   ├── deploy-all.sh                  # Master orchestration (8 steps)
│   ├── 02-create-infrastructure.sh    # Azure resources + Container Insights
│   ├── 03-configure-workload-identity.sh
│   ├── 04-deploy-cnpg-operator.sh
│   ├── 04a-install-barman-cloud-plugin.sh
│   ├── 05-deploy-postgresql-cluster.sh
│   ├── 06-configure-monitoring.sh     # Grafana
│   ├── 06a-configure-azure-monitor-prometheus.sh  # Azure Monitor
│   ├── 07-display-connection-info.sh
│   └── 07a-run-cluster-validation.sh  # In-cluster validation
│
├── kubernetes/                        # K8s manifests
│   ├── postgresql-cluster.yaml        # Reference manifest
│   └── cluster-validation-job.yaml    # In-cluster validation Job
│
├── grafana/                           # Grafana dashboards
│   └── grafana-cnpg-ha-dashboard.json # PostgreSQL HA dashboard
│
├── docs/                              # Comprehensive documentation
│   ├── README.md                      # Full technical guide
│   ├── SETUP_COMPLETE.md              # Start here
│   ├── QUICK_REFERENCE.md             # Command cheat sheet
│   ├── COST_ESTIMATION.md             # Budget planning
│   ├── PRE_DEPLOYMENT_CHECKLIST.md    # Pre-flight checks
│   ├── AZURE_MONITORING_SETUP.md      # Monitoring setup
│   ├── GRAFANA_DASHBOARD_GUIDE.md     # Dashboard usage
│   ├── IMPORT_DASHBOARD_NOW.md        # Dashboard import
│   ├── FAILOVER_TESTING.md            # HA testing
│   └── VM_SETUP_GUIDE.md              # Load test VM
│
└── .github/
    └── copilot-instructions.md        # AI assistant context
```
- Read `docs/SETUP_COMPLETE.md` - Overview and prerequisites
- Review `docs/QUICK_REFERENCE.md` - Command reference
- Check `docs/COST_ESTIMATION.md` - Budget planning
- Skim `docs/README.md` - Full capabilities
- Verify prerequisites installed (az, kubectl, helm, jq)
- Update `config/environment-variables.sh`
- Change PostgreSQL password in environment variables
- Verify region support for Premium v2
- Load environment: `source config/environment-variables.sh`
- Run `./scripts/deploy-all.sh`
- Monitor deployment progress (8 automated steps)
- Verify cluster health
- Check pods are running
- Test PostgreSQL connection
- Verify backups to storage
- Access Grafana dashboard
- Run pgbench test: `./scripts/08-test-pgbench.sh`
- Monitor cluster metrics
- Test backup/restore
- Scale as needed
- Apply updates
The deployment includes 3 PgBouncer instances for high-availability connection pooling:
| Component | Configuration |
|---|---|
| Instances | 3 pods with pod anti-affinity (different nodes) |
| Mode | Transaction pooling (optimal for OLTP workloads) |
| Max Connections | 10,000 client connections per instance |
| Pool Size | 25 PostgreSQL connections per user/database |
| Total Capacity | 30,000 concurrent client connections across all instances |
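The table above maps onto a CloudNativePG `Pooler` resource roughly as follows. This is a sketch of the read-write pooler only (a matching `ro` pooler exists as well); the actual manifest is generated by `scripts/05-deploy-postgresql-cluster.sh`, so treat names and values here as illustrative.

```shell
# Sketch of the Pooler resource behind the configuration table above.
# Values mirror this README; the deployed manifest may differ in detail.
cat > pooler-sketch.yaml <<'EOF'
apiVersion: postgresql.cnpg.io/v1
kind: Pooler
metadata:
  name: pg-primary-pooler-rw
  namespace: cnpg-database
spec:
  cluster:
    name: pg-primary
  instances: 3                  # three PgBouncer pods, anti-affinity spreads them
  type: rw
  pgbouncer:
    poolMode: transaction       # transaction-level pooling for OLTP workloads
    parameters:
      max_client_conn: "10000"  # 10K clients per instance (30K across 3 pods)
      default_pool_size: "25"   # PostgreSQL connections per user/database pair
EOF
# kubectl apply -f pooler-sketch.yaml
echo "pooler-sketch.yaml written"
```

Transaction mode means a server connection is borrowed only for the duration of each transaction, which is why 30,000 clients can share a small pool of backend connections.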
```
# PgBouncer services (Recommended)
pg-primary-pooler-rw # Read-write via connection pool
pg-primary-pooler-ro # Read-only via connection pool
# Direct PostgreSQL services
pg-primary-rw # Direct read-write (no pooling)
pg-primary-ro # Direct read-only (no pooling)
```

✅ Use PgBouncer for:
- Applications with many short-lived connections
- Microservices architectures
- Serverless workloads (Azure Functions, AWS Lambda)
- Connection-heavy applications (10K+ connections)
- High-availability workloads requiring connection efficiency

✅ Use direct connections for:
- Long-running analytical queries
- Database administration tasks
- Schema migrations
- Backup/restore operations
```shell
# Via PgBouncer (Applications)
psql "host=pg-primary-pooler-rw.cnpg-database.svc.cluster.local port=5432 dbname=appdb user=app"
# Direct (Admin tasks)
psql "host=pg-primary-rw.cnpg-database.svc.cluster.local port=5432 dbname=appdb user=app"
```

- ✅ Resource Group
- ✅ Virtual Network (10.0.0.0/8)
- ✅ Network Security Group
- ✅ AKS Cluster (1.32)
- System node pool: 2 x Standard_D2s_v5
- Postgres node pool: 3 x Standard_E8as_v6
- ✅ Managed Identity (Workload Identity)
- ✅ Storage Account (ZRS, StorageV2)
- ✅ Log Analytics Workspace
- ✅ Managed Grafana Instance
- ✅ CNPG Operator (cnpg-system namespace)
- ✅ PostgreSQL Cluster (cnpg-database namespace)
- 3 PostgreSQL instances (48 GiB RAM, 6 vCPU each)
- 3 PgBouncer pooler instances (transaction mode, 10K max connections)
- 200GB data storage per instance
- Premium SSD v2 disks (40,000 IOPS, 1,200 MB/s per disk)
- Expected performance: 8,000-10,000 TPS sustained
- ✅ StorageClass (managed-csi-premium-v2)
- ✅ Services (pooler read-write, pooler read-only, direct read-write, direct read-only)
- ✅ ConfigMaps & Secrets
- ✅ PersistentVolumeClaims
- ✅ High Availability (automatic failover)
- ✅ Zone Redundancy (across 3 AZs)
- ✅ Workload Identity (secure auth)
- ✅ Backup to Azure Storage
- ✅ Point-in-Time Recovery (7 days)
- ✅ WAL compression (lz4)
- ✅ Monitoring (Prometheus + Grafana)
- ✅ Health checks (automatic)
| Feature | Implementation |
|---|---|
| Authentication | Workload Identity + SCRAM-SHA-256 |
| Network | NSGs + Network Policies (Cilium) |
| Secrets | No hardcoded secrets in pods |
| RBAC | Kubernetes + Azure RBAC enabled |
| Encryption | Storage encrypted at rest |
| Backups | No public access, encrypted |
| Isolation | Dedicated namespaces |
- IOPS: 40,000 per disk (configurable 3,000-80,000)
- Throughput: 1,200 MB/s per disk (configurable 125-1,200 MB/s)
- Capacity: 200 GiB per instance
- Benefits: Excellent price-performance for high-TPS workloads (8-10K TPS)
- Regions: swedencentral, westeurope, eastus, canadacentral, etc.
- IOPS: Fixed per disk size (lower than Premium v2)
- Throughput: Fixed per disk size (lower than Premium v2)
- Benefits: Widely available, proven performance
- Tradeoff: Less cost-efficient and lower IOPS than Premium v2
- IOPS: 400K+ per disk (Standard_L8s_v3)
- Throughput: 2,000+ MB/s
- Benefits: Sub-millisecond latency, 50K+ TPS capability
- Tradeoff: Requires Azure Container Storage, higher cost
- Use Case: Extreme transactional workloads (see Step 5 documentation)
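For reference, a Premium SSD v2 StorageClass with per-class IOPS and throughput looks roughly like this for the AKS disk CSI driver. This is a sketch mirroring the values in this README; the deployment scripts create the real `managed-csi-premium-v2` class, which may differ.

```shell
# Sketch of a Premium SSD v2 StorageClass for the AKS disk CSI driver.
# IOPS/throughput values mirror the configuration elsewhere in this README.
cat > premium-v2-storageclass.yaml <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-csi-premium-v2
provisioner: disk.csi.azure.com
parameters:
  skuName: PremiumV2_LRS
  cachingMode: None               # Premium v2 disks do not support host caching
  DiskIOPSReadWrite: "40000"      # configurable up to 80,000
  DiskMBpsReadWrite: "1200"       # configurable up to 1,200 MB/s
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer   # keeps disk and pod in the same zone
allowVolumeExpansion: true
EOF
# kubectl apply -f premium-v2-storageclass.yaml
```

`WaitForFirstConsumer` matters for the zone-redundant layout: the disk is only provisioned once the PostgreSQL pod is scheduled, so both land in the same availability zone.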
In config/environment-variables.sh:
```shell
# Azure settings
PRIMARY_CLUSTER_REGION="swedencentral"
AKS_CLUSTER_VERSION="1.32"
# VM sizes (Standard_E8as_v6: 8 vCPU, 64 GiB RAM, AMD EPYC 9004 @ 3.7 GHz)
SYSTEM_NODE_POOL_VMSKU="Standard_D2s_v5"
USER_NODE_POOL_VMSKU="Standard_E8as_v6"
# Storage (Premium SSD v2 - Optimized for 10K TPS)
DISK_IOPS="40000"       # Premium SSD v2 IOPS (configurable up to 80,000)
DISK_THROUGHPUT="1200"  # Max Premium SSD v2 throughput (MB/s)
PG_STORAGE_SIZE="200Gi" # Increased for better performance
# PostgreSQL (Optimized for Standard_E8as_v6)
PG_DATABASE_NAME="appdb"
PG_DATABASE_USER="app"
PG_DATABASE_PASSWORD="SecurePassword123!" # Change this!
PG_MEMORY="48Gi"        # 75% of 64 GiB available on E8as_v6
PG_CPU="6"              # 75% of 8 vCPUs available on E8as_v6
# CNPG Helm chart version (deploys operator 1.27.1)
CNPG_VERSION="0.26.1"
```

All configuration is centralized in environment variables - no need to edit multiple files.
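The "Dynamic Resources" feature mentioned earlier derives PostgreSQL parameters from `PG_MEMORY`. A minimal sketch of how such a derivation could work, assuming common 25%/75% tuning ratios; the authoritative logic lives in `scripts/05-deploy-postgresql-cluster.sh` and may use different ratios.

```shell
# Derive PostgreSQL memory parameters from the PG_MEMORY allocation.
# Ratios follow common tuning guidance (assumption, not the repo's exact logic).
PG_MEMORY="48Gi"
mem_mb=$(( ${PG_MEMORY%Gi} * 1024 ))            # 48 GiB -> 49152 MB
shared_buffers_mb=$(( mem_mb / 4 ))             # ~25% of RAM
effective_cache_size_mb=$(( mem_mb * 3 / 4 ))   # ~75% of RAM
maintenance_work_mem_mb=$(( mem_mb / 16 ))      # often capped at ~2 GB in practice

echo "shared_buffers=${shared_buffers_mb}MB"            # -> 12288MB
echo "effective_cache_size=${effective_cache_size_mb}MB" # -> 36864MB
echo "maintenance_work_mem=${maintenance_work_mem_mb}MB" # -> 3072MB
```

Changing `PG_MEMORY` in one place therefore retunes all dependent parameters consistently, which is the point of centralizing configuration in environment variables.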
- Application Insights integration
- Container Insights (AKS logs)
- Performance metrics
- PostgreSQL metrics via PodMonitor
- Cluster health dashboards
- Performance visualization
- Alert capabilities
```
# PostgreSQL Metrics
pg_up                            # Database health
pg_stat_replication_lag_bytes    # Replication lag
pg_database_size_bytes           # Database size
pg_wal_archive_status            # Backup status
# PgBouncer Metrics
pgbouncer_pools_cl_active        # Active client connections
pgbouncer_pools_sv_active        # Active server connections
pgbouncer_pools_maxwait          # Connection pool wait time
pgbouncer_pools_cl_waiting       # Queued client connections
# Infrastructure Metrics
node_memory_MemAvailable_bytes   # Node memory
```
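As an illustration, the replication-lag metric above could feed an alert rule. This is a hypothetical example written as a `PrometheusRule` for a Prometheus Operator install; with Azure Monitor managed Prometheus you would define an equivalent Prometheus rule group instead. Verify the metric name against your exporter before relying on it.

```shell
# Hypothetical alert on replication lag, built from the metric list above.
cat > replication-lag-alert.yaml <<'EOF'
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: pg-replication-lag
  namespace: cnpg-database
spec:
  groups:
    - name: postgresql-ha
      rules:
        - alert: ReplicationLagHigh
          expr: pg_stat_replication_lag_bytes > 104857600   # > 100 MiB behind
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: "Replica lagging more than 100 MiB for 5 minutes"
EOF
# kubectl apply -f replication-lag-alert.yaml
```

With RPO = 0 provided by the quorum sync replica, lag alerts mainly protect the async replica, whose lag bounds how stale read-only queries can be.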
- Azure CLI (v2.56+)
- kubectl (v1.21+)
- Helm (v3.0+)
- jq (v1.5+)
- OpenSSL (v3.3+)
- Krew + CNPG plugin
- Subscription with appropriate quota
- Permissions: Owner or User Access Administrator
- Region with Premium v2 support
- Change default passwords
- Verify region support
- Check subscription quota
- Update managed identity references
- Review cost implications
Before deployment:
- Prerequisites installed
- Configuration reviewed
- Passwords changed
- Region selected
- Quota verified
After deployment:
- Cluster created
- Pods running (3 PostgreSQL + 3 PgBouncer instances)
- Storage provisioned
- Backups to storage
- Grafana accessible
- Connection successful (both direct and pooled)
```shell
# Check operator
kubectl logs -n cnpg-system deployment/cnpg-cloudnative-pg
# Check cluster status
kubectl cnpg status pg-primary -n cnpg-database
# Check all pods (PostgreSQL + PgBouncer)
kubectl get pods -n cnpg-database
# Check PgBouncer logs
kubectl logs -n cnpg-database -l cnpg.io/poolerName=pg-primary-pooler
# Check storage
kubectl get pvc -n cnpg-database
# Check backups
az storage blob list --account-name <account> --container-name backups
# Test performance
./scripts/08-test-pgbench.sh
```

- Pods stuck in Init: Check PVC binding and storage quota
- WAL archiving fails: Verify managed identity permissions
- Operator not deploying: Check Helm repository and CRDs
- Premium v2 unavailable: Check region support
See docs/README.md for detailed troubleshooting.
1. Understand the basics
   - Read: docs/SETUP_COMPLETE.md
   - Review: docs/README.md
2. Explore configuration
   - Edit: config/environment-variables.sh
   - Review: kubernetes/postgresql-cluster.yaml
3. Deploy to Azure
   - Run: scripts/deploy-all.sh
   - Monitor: kubectl commands
4. Test operations
   - Connect to database
   - Create backups
   - Test failover
   - Monitor metrics
5. Advanced topics
   - Scale cluster
   - Update PostgreSQL
   - Performance tuning
   - Backup management
Your deployment is successful when:
- ✅ 3 PostgreSQL pods running
- ✅ 3 PgBouncer pooler pods running
- ✅ Primary pod shows "Primary" status
- ✅ Replica pods show "Standby (sync)"
- ✅ WAL archiving shows "OK"
- ✅ Backups present in storage
- ✅ Can connect via psql (both direct and pooled)
- ✅ Grafana dashboard accessible
- ✅ All PVCs bound and sized correctly
- ✅ PgBouncer metrics showing active connections
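As a convenience, the pod-count criteria above can be spot-checked with a small script. This is a sketch only; `scripts/07a-run-cluster-validation.sh` remains the authoritative check. The label selectors mirror those used elsewhere in this README, and the script degrades to a demo when `kubectl` is unavailable.

```shell
# Spot-check the pod-count success criteria.
check() {  # check <description> <expected> <actual>
  if [ "$2" = "$3" ]; then
    echo "PASS: $1"
  else
    echo "FAIL: $1 (want $2, got $3)"
  fi
}

if command -v kubectl >/dev/null 2>&1; then
  # Count Running pods via the labels used elsewhere in this README.
  pg_pods=$(kubectl get pods -n cnpg-database -l cnpg.io/cluster=pg-primary \
      --field-selector=status.phase=Running -o name | wc -l | tr -d ' ')
  pooler_pods=$(kubectl get pods -n cnpg-database -l cnpg.io/poolerName=pg-primary-pooler \
      --field-selector=status.phase=Running -o name | wc -l | tr -d ' ')
  check "3 PostgreSQL pods running" 3 "$pg_pods"
  check "3 PgBouncer pooler pods running" 3 "$pooler_pods"
else
  check "demo (kubectl unavailable)" 3 3
fi
```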
After deployment, validate high availability with comprehensive failover tests:
```shell
# Navigate to failover testing
cd scripts/failover-testing
# Set PostgreSQL password
export PGPASSWORD=$(kubectl get secret pg-primary-app -n cnpg-database \
  -o jsonpath='{.data.password}' | base64 -d)
# Run recommended scenario (PgBouncer + Simulated Failure)
./scenario-2b-aks-pooler-simulated.sh
```

Automated AKS Pod Scenarios (ready to run):
- `scenario-1a-aks-direct-manual.sh` - Direct PostgreSQL + Manual failover
- `scenario-1b-aks-direct-simulated.sh` - Direct PostgreSQL + Simulated failure
- `scenario-2a-aks-pooler-manual.sh` - PgBouncer + Manual failover ✅
- `scenario-2b-aks-pooler-simulated.sh` - PgBouncer + Simulated failure ✅ Recommended
Azure VM External Client Scenarios (requires VM setup):
- See `docs/VM_SETUP_GUIDE.md` for Azure VM configuration
- See `scripts/failover-testing/VM_SCENARIOS_REFERENCE.md` for external client testing
- ✅ RPO = 0 validation (zero data loss with synchronous replication)
- ✅ RTO < 10s measurement (recovery time objective)
- ✅ Connection resilience (Direct vs PgBouncer comparison)
- ✅ Data consistency (pre/post-failover transaction verification)
- ✅ Client reconnection (automatic vs manual)
- ✅ Performance impact (TPS and latency during failover)
- Target TPS: 4,000-8,000 sustained (payment gateway workload)
- Failover Duration: <10 seconds (automatic promotion)
- Data Loss: Zero (RPO=0 with synchronous replication)
- PgBouncer Advantage: Transparent reconnection, <1% error rate
- Direct Connection: 5-10% error rate during failover window
- Complete Guide: docs/FAILOVER_TESTING.md
- VM Setup: docs/VM_SETUP_GUIDE.md
- Quick Reference: scripts/failover-testing/README.md
- CloudNativePG: https://cloudnative-pg.io/
- Azure AKS: https://learn.microsoft.com/en-us/azure/aks/
- Premium v2 Disks: https://learn.microsoft.com/en-us/azure/virtual-machines/disks-types
- Well-Architected Framework: https://learn.microsoft.com/en-us/azure/architecture/framework/
Project Version: v1.0.0 (Semantic Versioning)
Release Date: October 2025
AKS Version: 1.32
Kubernetes Version: 1.32
CNPG Operator: 1.27.1
PostgreSQL: 18.0
Status: ✅ Lab & PoC Ready
Ready to deploy? Start with docs/SETUP_COMPLETE.md
