This repository contains the IaC (Infrastructure as Code) configuration for my homelab. The setup securely connects a public VPS with my private home server network.
Every service is either containerized or runs in its own VM.
Security is a top priority. Only the SSH and HTTP(S) ports are exposed. All HTTP(S) traffic goes through the Traefik reverse proxy. Authelia authenticates and authorizes all requests. CrowdSec automatically bans bad actors by parsing various logs. WireGuard VPN connects all the hosts in my homelab securely.
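As a rough illustration of that exposure model (not the repo's actual rules; the WireGuard port number is an assumption):

```bash
# Illustrative firewall sketch for the gateway VPS: deny everything by default,
# then open only the handful of ports named above.
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp      # SSH
sudo ufw allow 80/tcp      # HTTP, redirected to HTTPS by Traefik
sudo ufw allow 443/tcp     # HTTPS, terminated by Traefik
sudo ufw allow 51820/udp   # WireGuard listener (port number assumed)
sudo ufw enable
```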
- Infrastructure as Code: Everything is defined in OpenTofu/Terraform for reproducibility
- Automated Deployment: GitHub Actions + Ansible playbooks are used for continuous deployment
- Comprehensive Monitoring: I deploy Prometheus, Grafana, and Gatus for full observability
The setup uses a public VPS as an internet gateway that securely tunnels traffic to the services on my home server, which sits behind CGNAT. This approach provides:
- Availability: Critical services on the VPN keep functioning if my home has a power/network outage.
- Privacy & security: My home network remains completely isolated from the internet. A single point of entry allows effective intrusion detection.
The homelab runs various services distributed across the VPS and the home server.
These services run on both the VPS and the home server, with configurations specific to each environment:
- Traefik - Reverse proxy for routing traffic and SSL termination
- Filebrowser - Web-based file explorer for managing files
- Backrest - Web-UI for backup management built on top of restic
- Apprise - Push notification service for alerts and monitoring
Observability and monitoring stack deployed across both environments:
- Prometheus - Time-series database for metrics collection
- Node exporter - Exports system metrics from both VPS and home server
- Grafana - Dashboarding and visualization for metrics and logs
- Gatus - Service health monitoring and status page generation
- cAdvisor - Container resource usage monitoring (home server only)
Services that don't require much disk space or compute are hosted directly on the VPS:
- Authelia - Authentication and authorization provider with OIDC support
- Crowdsec - Collaborative security engine for blocking malicious IPs
- Wireguard - VPN server for secure tunneling to home server
- IT-tools - Collection of handy online tools for developers
- Stirling-pdf - PDF manipulation and processing tools
- Vaultwarden - Self-hosted Bitwarden-compatible password manager
- Paperless-ngx - Document management system with OCR capabilities
- Linkwarden - Bookmark and link management tool
- Readeck - Read-later and article management service
- Github Release Monitor - Monitors GitHub releases for software updates
Services hosted on the home server behind the VPN:
- Immich - High-performance photo and video management with AI features
- Jellyfin - Media server for streaming movies, TV shows, and family videos
- Audiobookshelf - Audiobook and podcast server with progress tracking
- Mealie - Recipe management and meal planning application
- qBittorrent - BitTorrent client with web interface
- Pinchflat - YouTube channel archiver using yt-dlp
- Speedtest - Self-hosted internet speed testing
- Excalidraw - Collaborative whiteboard for creating hand-drawn diagrams
WireGuard is the secret sauce that lets the outside world reach my home server even though it sits behind CGNAT. It works through what's often described as NAT hole punching: the home side initiates the tunnel outbound to the VPS's public endpoint, and periodic keepalives hold the CGNAT mapping open so traffic can flow in both directions.
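A minimal sketch of the home side of that tunnel (keys, hostname, and listen port are placeholder assumptions; only the `10.13.13.0/24` subnet comes from the network layout described below):

```bash
# Hypothetical home-router WireGuard config; values are illustrative.
sudo tee /etc/wireguard/wg0.conf > /dev/null <<'EOF'
[Interface]
# Home side of the gateway-to-home tunnel (10.13.13.0/24)
Address = 10.13.13.2/24
PrivateKey = <home-router-private-key>

[Peer]
# Gateway VPS: the only side with a public, static endpoint
PublicKey = <vps-public-key>
Endpoint = vps.example.com:51820
AllowedIPs = 10.13.13.0/24
# Keepalives hold the CGNAT mapping open so the VPS can reach back in
PersistentKeepalive = 25
EOF

sudo wg-quick up wg0
```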
This homelab uses a dual-VPN setup to securely connect the public VPS to the home server network behind a dynamic IP.
- Inbound (Internet → Home Services):
  Internet → Gateway VPS → wg0 tunnel → Home Router → wg1 → Home VM → Docker Service
- Management Access:
  Admin → Gateway VPS → wg0 → Home Router → wg1 → Home VMs (via SSH jump host)
- Service Access:
  - Public services are exposed through Traefik on the Gateway VPS
  - Home services are tunneled through the VPN and proxied by the Gateway Traefik (see the sketch after this list)
  - Internal home services use the local Traefik instance
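A hedged sketch of how the Gateway Traefik could publish a home service over the tunnel using the file provider. The domain, entrypoint and resolver names, dynamic-config path, and tunnel address are all assumptions; only Jellyfin's default port (8096) is standard.

```bash
# Hypothetical dynamic-config file on the Gateway VPS; values are placeholders.
sudo tee /etc/traefik/dynamic/jellyfin.yml > /dev/null <<'EOF'
http:
  routers:
    jellyfin:
      rule: "Host(`jellyfin.example.com`)"
      entryPoints:
        - websecure
      service: jellyfin
      tls:
        certResolver: letsencrypt
  services:
    jellyfin:
      loadBalancer:
        servers:
          # Docker Services VM reached over the VPN (address assumed)
          - url: "http://10.13.14.3:8096"
EOF
```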
- Network Isolation: The home network is completely isolated from the internet
- Jump Host: The Gateway VPS serves as the SSH jump host for home server management (see the example after this list)
- Stateful Filtering: iptables rules prevent unauthorized access patterns
- Dynamic IP Support: Home server can have dynamic IP; only VPS needs static IP
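A minimal sketch of that jump-host access; usernames and the VPS hostname are placeholders, and `192.168.1.103` is the Docker Services VM from the VM table below.

```bash
# Hypothetical ~/.ssh/config entries for hopping through the Gateway VPS.
cat >> ~/.ssh/config <<'EOF'
Host gateway-vps
    HostName vps.example.com
    User ubuntu

# Docker Services VM, reached by jumping through the Gateway VPS
Host docker-home
    HostName 192.168.1.103
    User ubuntu
    ProxyJump gateway-vps
EOF

# One-off equivalent without the config entries:
ssh -J ubuntu@vps.example.com ubuntu@192.168.1.103
```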
The entire homelab infrastructure is defined and managed using OpenTofu/Terraform, enabling reproducible infrastructure deployments across Oracle Cloud Infrastructure (OCI) and Proxmox.
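Day-to-day changes follow the usual plan/apply loop; a minimal sketch, where the per-environment directory names are assumptions about the repo layout:

```bash
# Hypothetical layout: one OpenTofu root per environment.
cd terraform/oci             # Gateway VPS on Oracle Cloud
tofu init                    # fetch providers and configure state
tofu plan -out=tfplan        # review the proposed changes
tofu apply tfplan            # apply exactly what was reviewed

cd ../proxmox                # home-server VMs on Proxmox
tofu init && tofu plan -out=tfplan && tofu apply tfplan
```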
The public VPS infrastructure leverages Oracle's Always Free Tier ARM-based compute instances:
| Instance | Provider | Type | CPU | RAM | Storage | Cost | Purpose |
|---|---|---|---|---|---|---|---|
| Gateway VPS | Oracle Cloud (OCI) | VM.Standard.A1.Flex (ARM64) | 4 OCPUs | 24GB | 150GB | Free | Primary internet gateway |
| Little Gateway | Oracle Cloud (OCI) | VM.Standard.E2.1.Micro (AMD64) | 1 OCPU | 1GB | 50GB | Free | Backup/experimentation |
The home server runs multiple VMs on Proxmox hypervisor, all defined through Terraform:
| VM Name | CPU Cores | RAM | Disk | IP Address | Purpose | Key Features |
|---|---|---|---|---|---|---|
| AdGuard Home | 2 | 512MB | 4GB | 192.168.1.107/24 | DNS server & ad blocking | Auto-configured DNS rules for homelab domains |
| WireGuard Router | 2 | 512MB | 4GB | 192.168.1.104/24 | VPN gateway to OCI VPS | Dual-interface setup (wg0↔VPS, wg1↔internal VMs) |
| Docker Services | 4 | 20GB | 100GB | 192.168.1.103/24 | Docker Compose host | NFS mounts: Jellyfin, Movies, TV, Audiobooks, eBooks, Videos, Immich |
| Tailscale Exit node | 2 | 512MB | 4GB | 192.168.1.105/24 | Backup VPN & exit node | Fallback remote connectivity to my home network |
- Home Network: `192.168.1.0/24` with static IP assignments
- VPN Networks: `10.13.13.0/24` (gateway-to-home) and `10.13.14.0/24` (internal mesh)
- DNS Resolution: AdGuard configured with homelab service records
- Firewall Rules: UFW rules automatically configured per service requirements
The homelab infrastructure is automatically deployed using GitHub Actions and Ansible, with secrets securely managed through Bitwarden Secrets Manager.
- Trigger: Deployments run automatically on every push to the `main` branch, or can be triggered manually
- GitHub Actions Workflow: The `CD.yml` workflow orchestrates the entire deployment process
- Ansible Playbooks: Two main playbooks handle deployments:
  - `deploy_vps.yml`: Deploys services to the public VPS
  - `deploy-home-server-docker.yml`: Deploys services to the home server, using the VPS as the jump host
Each deployment follows this sequence:
- Environment Setup:
  - Install Ansible and the Bitwarden SDK on the GitHub Actions runner
  - Retrieve SSH keys and host information from Bitwarden Secrets Manager
  - Configure the SSH agent for secure host access
- Configuration Management:
  - Clone the latest repository code to target hosts
  - Process `secret-mappings.yml` files to identify required secrets
  - Fetch secrets from Bitwarden using the `bws` CLI tool
  - Generate `.env` files and configuration files with interpolated secrets
- Service Deployment:
  - Deploy Docker Compose services with `docker compose up`
  - Configure system services (logrotate, cron jobs, etc.)
  - Verify service health and display deployment status
All secrets are managed through Bitwarden Secrets Manager.
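For reference, the `bws` CLI that the playbooks rely on can also be used by hand; a small sketch, where the secret ID is a placeholder:

```bash
# Authenticate with a machine-account access token scoped to the homelab project.
export BWS_ACCESS_TOKEN="bitwarden-access-token"

# List the secrets the token can see, then fetch one by its ID (placeholder UUID).
bws secret list
bws secret get 00000000-0000-0000-0000-000000000000
```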
For manual deployments or troubleshooting:
```bash
# Install dependencies
pip install ansible bitwarden-sdk
ansible-galaxy collection install bitwarden.secrets

# Set Bitwarden access token
export BWS_ACCESS_TOKEN="bitwarden-access-token"

# Deploy to VPS
ansible-playbook -i ansible/inventory.yml ansible/playbooks/deploy_vps.yml

# Deploy to home server
ansible-playbook -i ansible/inventory.yml ansible/playbooks/deploy-home-server-docker.yml
```

- Login to the CrowdSec console and enroll the node by following the instructions.
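Enrollment roughly looks like this, assuming CrowdSec runs in a container named `crowdsec` (the enrollment key comes from the console and is a placeholder here):

```bash
# Enroll this CrowdSec instance with the console.
docker exec crowdsec cscli console enroll <enrollment-key>
# Restart the agent, then accept the new instance in the console UI.
docker restart crowdsec
```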
Some data is backed up using Backrest (restic). The following paths are backed up:
- Vaultwarden: `/var/lib/docker-data/vaultwarden`
- Apprise: `/var/lib/docker-data/apprise/config`
- Authelia: `/var/lib/docker-data/authelia/config`
- Backrest: `/var/lib/docker-data/backrest/config`
Unfortunately, the Backrest config cannot be checked in because it doesn't support environment variable interpolation: garethgeorge/backrest#788. Hence, when setting up for the first time, the repository and backup schedule need to be configured through the Backrest web UI.
In case of data loss, restore the latest snapshot using restic.
- Restore the backup to `/var/lib/docker-data/paperless-ngx` (see the sketch below)
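A hedged restore sketch using restic directly; the repository URL and password handling are placeholders, and in practice these values come from the Backrest configuration:

```bash
# Point restic at the same repository Backrest writes to (values are placeholders).
export RESTIC_REPOSITORY="<repository-url>"
export RESTIC_PASSWORD="<repository-password>"

# Inspect available snapshots, then restore the latest one for the paperless data.
restic snapshots
restic restore latest --target / --include /var/lib/docker-data/paperless-ngx
```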
Some data isn't backed up, so it needs to be regenerated after a restore:
- Generate the document thumbnails: https://docs.paperless-ngx.com/administration/#thumbnails
- Generate the index: https://docs.paperless-ngx.com/administration/#index
- Train the classifier: https://docs.paperless-ngx.com/administration/#managing-the-automatic-matching-algorithm
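With the compose stack running, those steps can be executed inside the container; a sketch assuming the Docker Compose service is named `webserver`, as in the upstream paperless-ngx compose files:

```bash
# Rebuild thumbnails, the search index, and the matching classifier after a restore.
docker compose exec webserver document_thumbnails
docker compose exec webserver document_index reindex
docker compose exec webserver document_create_classifier
```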
