A Docker Compose-based PostgreSQL backup solution that performs periodic `pg_dump` backups and uploads them to S3-compatible storage (AWS S3, Backblaze B2, etc.). We use Garage for local testing.
```mermaid
flowchart LR
    subgraph Docker Compose Stack
        PG[(PostgreSQL)]
        Backup[Backup Container]
        S3[(Garage S3)]
        Ofelia[Ofelia Scheduler]
        PG <-->|pg_dump / psql| Backup
        Backup <-->|mc client| S3
        Ofelia -->|triggers| Backup
    end
```
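At its core, each scheduled run streams a dump straight to object storage. A minimal sketch of that pipeline, assuming an `mc` alias named `s3` has already been configured inside the backup container (the actual logic lives in `backup.sh`):

```bash
# Sketch only: dump, compress, and stream to S3-compatible storage.
# Assumes POSTGRES_* / PGPASSWORD are set and `mc alias set s3 ...` has run.
pg_dump -h "$POSTGRES_HOST" -U "$POSTGRES_USER" "$POSTGRES_DB" \
  | gzip \
  | mc pipe "s3/${S3_BUCKET}/$(date -u +%Y/%m/%d)/${POSTGRES_DB}_$(date -u +%H%M%S).sql.gz"
```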
- Clone and configure: `cp .env.example .env` (edit `.env` with your settings)
- Start all services: `make up`
- Trigger a manual backup: `make backup`
- List available backups: `make list-backups`
- Restore a backup: `make restore FILE=2024/01/15/myapp_120000.sql.gz`
All configuration is done via environment variables. See `.env.example` for available options:
| Variable | Default | Description |
|---|---|---|
| `POSTGRES_VERSION` | `18` | PostgreSQL version (see note) |
| `POSTGRES_DB` | `myapp` | Database name |
| `POSTGRES_USER` | `postgres` | Database user |
| `POSTGRES_PASSWORD` | `changeme` | Database password |
| `BACKUP_SCHEDULE` | `@hourly` | Cron schedule (every hour) |
| `CLEANUP_SCHEDULE` | `0 0 1 * * *` | Cleanup schedule (daily 1 AM) |
| `S3_ENDPOINT` | `http://garage:3900` | S3 endpoint URL |
| `S3_BUCKET` | `backups` | S3 bucket name |
| `S3_ACCESS_KEY` | `garage-access-key` | S3 access key |
| `S3_SECRET_KEY` | `garage-secret-key` | S3 secret key |
Note: Ofelia uses a 6-field cron format (with seconds) or shortcuts like `@hourly` and `@daily`. See Customizing the Backup Schedule.
Retention is configured using Restic-style time-bucket policies. Policies are ORed - a backup matching ANY policy is kept.
| Variable | Default | Description |
|---|---|---|
| `RETENTION_KEEP_LAST` | `3` | Keep N most recent backups |
| `RETENTION_KEEP_HOURLY` | `24` | Keep one per hour for N hours |
| `RETENTION_KEEP_DAILY` | `7` | Keep one per day for N days |
| `RETENTION_KEEP_WEEKLY` | `4` | Keep one per week for N weeks |
| `RETENTION_KEEP_MONTHLY` | `6` | Keep one per month for N months |
| `RETENTION_KEEP_YEARLY` | `2` | Keep one per year for N years |
| `RETENTION_MIN_BACKUPS` | `1` | Minimum backups to keep (safety net) |
| `RETENTION_DRY_RUN` | `false` | Preview mode (log only, no deletions) |
With hourly backups, steady-state retention after 2+ years is approximately 39 backups.
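That estimate is just the bucket arithmetic: 3 last + 24 hourly + 7 daily + 4 weekly + 6 monthly + 2 yearly = 46 slots, minus the overlap from recent backups that satisfy several policies at once (the newest backup is usually also the newest hourly and the newest daily one).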
Note on PostgreSQL 18: The Docker volume mount path has changed from `/var/lib/postgresql/data` to `/var/lib/postgresql`. This project uses the new path. If upgrading from an older PostgreSQL version, you may need to migrate your data.
```bash
make help # Show all available commands
make up # Start all services
make down # Stop all services
make backup # Trigger manual backup
make restore FILE=x # Restore specific backup
make cleanup # Run retention cleanup
make cleanup-dry-run # Preview cleanup (no deletions)
make test # Run integration tests
make test-retention # Run retention policy tests
make logs # Show logs from all services
make clean # Stop services and remove volumes
make shell-postgres # Open psql in postgres container
make shell-backup # Open shell in backup container
make list-backups # List all backups in S3
```

Backups are stored in S3 with the following path structure:
```
backups/
└── YYYY/
    └── MM/
        └── DD/
            └── dbname_HHMMSS.sql.gz
```

Example: `backups/2024/01/15/myapp_120000.sql.gz`
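The date-based layout makes it easy to scope listings to a prefix. For example, to list one day's backups (reusing the `mc` alias configured in the backup container, assumed here to be `s3`):

```bash
# List all backups taken on a given day.
# $S3_BUCKET expands on the host; export it or source .env first.
docker compose exec backup mc ls "s3/$S3_BUCKET/2024/01/15/"
```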
To use AWS S3, MinIO, or another S3-compatible service instead of Garage:
- Remove the `garage` and `garage-init` services from `docker-compose.yml`
- Update `.env` with your S3 credentials:

```bash
S3_ENDPOINT=https://s3.amazonaws.com
S3_BUCKET=your-bucket-name
S3_ACCESS_KEY=your-access-key
S3_SECRET_KEY=your-secret-key
```
For Backblaze B2:
- Create a private bucket with SSE-B2 encryption enabled
- Create an application key with Read and Write access to that bucket
- Configure lifecycle settings: "Keep only the last version of the file" (so deletions by cleanup.sh actually free storage)
- Update `.env`:

```bash
S3_ENDPOINT=https://s3.{region}.backblazeb2.com  # e.g., s3.eu-central-003.backblazeb2.com
S3_BUCKET=your-bucket-name
S3_ACCESS_KEY=your-key-id
S3_SECRET_KEY=your-application-key
```
The default schedule runs backups every hour. Modify `BACKUP_SCHEDULE` using cron format.
Note: Ofelia uses a 6-field cron format (with seconds): `second minute hour day month weekday`. You can also use shortcuts like `@hourly`, `@daily`, `@weekly`.
```bash
# Every hour (using shortcut - recommended)
BACKUP_SCHEDULE=@hourly

# Every hour (6-field format)
BACKUP_SCHEDULE=0 0 * * * *

# Daily at midnight
BACKUP_SCHEDULE=0 0 0 * * *

# Weekly on Sunday at 2am
BACKUP_SCHEDULE=0 0 2 * * 0
```

Backup retention is built-in using Restic-style time-bucket policies. The cleanup job runs daily at 1 AM by default.
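To make the bucketing idea concrete, here is an illustrative stand-alone sketch (not the project's `cleanup.sh`) that keeps the newest backup per calendar day, up to a daily limit; the real cleanup ORs several such policies (hourly, weekly, monthly, and so on):

```bash
#!/bin/bash
# Illustrative only: keep the newest backup in each of the last KEEP_DAILY
# day-buckets. Reads "YYYY/MM/DD/name_HHMMSS.sql.gz" keys on stdin.
KEEP_DAILY=7
sort -r | awk -F/ -v keep="$KEEP_DAILY" '
  !seen[$1 "/" $2 "/" $3]++ {      # first key seen per day = newest in bucket
    if (++buckets <= keep) print   # stop after `keep` day-buckets
  }'
```

Feeding it a list of object keys (for example, the key column of `mc ls --recursive` output) prints the objects a daily-only policy would retain.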
Preview what would be deleted:

```bash
make cleanup-dry-run
```

Run cleanup manually:

```bash
make cleanup
```

Customize retention policy by setting environment variables in `.env`:
```bash
# Keep more monthly archives for compliance
RETENTION_KEEP_MONTHLY=12

# Keep 2 weeks of hourly backups
RETENTION_KEEP_HOURLY=336
```

See `.env.example` for all available options.
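You can also test a candidate policy without editing `.env` by overriding the variables for a single run (assuming the compose service is named `backup`, as in this project):

```bash
# One-off dry run with a stricter policy; nothing is deleted
docker compose exec -e RETENTION_DRY_RUN=true -e RETENTION_KEEP_DAILY=14 \
  backup /scripts/cleanup.sh
```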
Copy these files to your project:
```
docker/
├── backup/
│   ├── Dockerfile
│   └── scripts/
│       ├── backup.sh
│       ├── restore.sh
│       └── cleanup.sh
└── garage/              # Optional: only for local S3 testing
    ├── garage.toml
    └── scripts/
        └── init.sh
```
```yaml
services:
  # Your existing app
  app:
    build: .
    depends_on:
      postgres:
        condition: service_healthy
    environment:
      DATABASE_URL: postgres://postgres:changeme@postgres:5432/myapp

  # Add these services
  postgres:
    image: postgres:18-alpine
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-changeme}
    volumes:
      - postgres_data:/var/lib/postgresql
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -d myapp"]
      interval: 10s
      timeout: 5s
      retries: 5

  backup:
    build: ./docker/backup
    depends_on:
      postgres:
        condition: service_healthy
    environment:
      POSTGRES_HOST: postgres
      POSTGRES_DB: myapp
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-changeme}
      S3_ENDPOINT: ${S3_ENDPOINT}
      S3_BUCKET: ${S3_BUCKET}
      S3_ACCESS_KEY: ${S3_ACCESS_KEY}
      S3_SECRET_KEY: ${S3_SECRET_KEY}
    labels:
      ofelia.enabled: "true"
      ofelia.job-exec.backup.schedule: "@hourly"
      ofelia.job-exec.backup.command: "/scripts/backup.sh"
      ofelia.job-exec.cleanup.schedule: "0 0 1 * * *"
      ofelia.job-exec.cleanup.command: "/scripts/cleanup.sh"

  ofelia:
    image: mcuadros/ofelia:v3.0.8
    depends_on:
      - backup
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    command: daemon --docker

volumes:
  postgres_data:
```

```bash
# .env
POSTGRES_PASSWORD=your-secure-password
S3_ENDPOINT=https://s3.amazonaws.com # or your S3-compatible endpoint
S3_BUCKET=myapp-backups
S3_ACCESS_KEY=AKIA...
S3_SECRET_KEY=...
```

```bash
# Start services
docker compose up -d
# Check all services are healthy
docker compose ps
# Verify backup container can reach PostgreSQL
docker compose exec backup pg_isready -h postgres -U postgres
# Verify S3 connectivity
docker compose exec backup mc alias set s3 $S3_ENDPOINT $S3_ACCESS_KEY $S3_SECRET_KEY
docker compose exec backup mc ls s3/$S3_BUCKET/
```

```bash
# Run manual backup
docker compose exec backup /scripts/backup.sh
# Verify backup exists in S3
docker compose exec backup mc ls --recursive s3/$S3_BUCKET/
```

```bash
# List available backups
docker compose exec backup mc ls --recursive s3/$S3_BUCKET/
# Restore (to same database or a test instance)
docker compose exec backup /scripts/restore.sh 2026/02/03/myapp_120000.sql.gz
```

Add this to your project as `scripts/verify-backup.sh`:
```bash
#!/bin/bash
set -euo pipefail
echo "=== Verifying backup configuration ==="
echo "1. Checking PostgreSQL..."
docker compose exec -T backup pg_isready -h postgres -U postgres
echo "2. Checking S3 access..."
docker compose exec -T backup mc ls s3/$S3_BUCKET/ >/dev/null
echo "3. Running test backup..."
docker compose exec -T backup /scripts/backup.sh
echo "4. Verifying backup in S3..."
BACKUP=$(docker compose exec -T backup mc ls --recursive s3/$S3_BUCKET/ | tail -1)
echo " Latest backup: $BACKUP"
echo "=== All checks passed ==="For local development, include the Garage services from this project to have a local S3-compatible store - no AWS credentials needed.
Run the integration test suite:

```bash
make test
```

This will:
- Start all services with ephemeral volumes
- Insert test data
- Trigger a backup
- Verify backup exists in S3
- Restore to a second PostgreSQL instance
- Verify restored data matches original (see the sketch after this list)
- Clean up
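The data-match step boils down to comparing query results across the two instances. A hypothetical spot check; the second instance's service name (`postgres-restore`) and the table (`users`) are illustrative only, not names from this project:

```bash
# Compare row counts between the original and the restored database
ORIG=$(docker compose exec -T postgres psql -U postgres -d myapp -tAc "SELECT count(*) FROM users")
REST=$(docker compose exec -T postgres-restore psql -U postgres -d myapp -tAc "SELECT count(*) FROM users")
[ "$ORIG" = "$REST" ] && echo "Row counts match" || { echo "Mismatch: $ORIG vs $REST"; exit 1; }
```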
Check service health:

```bash
docker compose ps
docker compose logs
```

Check backup container logs:

```bash
make logs-backup
```

Verify S3 connectivity:

```bash
docker compose exec backup mc ls s3/
```

Ensure the backup file exists:

```bash
make list-backups
```

Check for active connections blocking restore:

```bash
docker compose exec postgres psql -U postgres -c "SELECT * FROM pg_stat_activity WHERE datname='myapp';"
```
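If stale sessions are holding the database, they can be terminated before retrying the restore (this drops those connections, so use with care):

```bash
# Terminate all other sessions connected to the target database
docker compose exec postgres psql -U postgres -c \
  "SELECT pg_terminate_backend(pid) FROM pg_stat_activity \
   WHERE datname = 'myapp' AND pid <> pg_backend_pid();"
```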
This project includes a GitHub Actions workflow that:

- Lints Dockerfiles, shell scripts, and compose config
- Runs integration tests with full backup/restore cycle
- Captures logs on failure for debugging
The workflow runs on:
- Push to `main` branch
- Pull requests to `main` branch
This project uses two tools for automated dependency updates:
| Tool | Manages | Schedule |
|---|---|---|
| Dependabot | GitHub Actions, Dockerfile | Weekly (Saturday) |
| Renovate | Docker images in docker-compose.yml | Weekly (Saturday) |
To trigger Renovate on-demand:
- Go to Actions > Renovate workflow
- Click "Run workflow"
- Optionally enable "Dry run" to preview changes without creating PRs
- Minor and patch updates are auto-merged after 3 days
- Major updates require manual review
- PostgreSQL version is managed via `POSTGRES_VERSION` env var (not auto-updated)
Images managed by Renovate:

- `dxflrs/garage` - S3-compatible storage
- `alpine` - Used by garage-init
- `mcuadros/ofelia` - Job scheduler
Renovate requires a `RENOVATE_TOKEN` repository secret:
- Go to GitHub Settings > Developer settings > Personal access tokens > Tokens (classic)
- Generate a new token with `repo` and `workflow` scopes
- Add it as a repository secret named `RENOVATE_TOKEN`
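If you prefer the GitHub CLI over the web UI, the same secret can be added with `gh` (assuming the token is in `$RENOVATE_TOKEN`):

```bash
# Store the token as a repository secret via the GitHub CLI
gh secret set RENOVATE_TOKEN --body "$RENOVATE_TOKEN"
```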
This project is licensed under the MIT License.