See also: Hetzner Cloud Playbook.
This repository contains setup scripts for all servers required to host an attack-defense CTF with the saarCTF framework. It consists of three server types:
- Gameserver (Controller): Databases, Scoreboard, Flag submitter, Control panels, Monitoring
- VPN: OpenVPN, Routing, Firewall, Anonymization
- Checker: Checker script runner (multiple instances possible)
First, you need a git configuration repository and a local config.json file for the build process.
Second, you need a packer configuration to create images.
See Configuration for details on both.
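
A rough, hedged sketch of that preparation step (the repository URL and file paths below are placeholders; the actual layout and fields are documented in Configuration):

```bash
# Illustrative only: fetch your own configuration repository and provide a
# local config.json for the build. All names and paths here are placeholders.
git clone git@example.com:yourctf/ctf-config.git ~/ctf-config
cp ~/ctf-config/config.json ./config.json   # or write config.json by hand
$EDITOR config.json                         # adjust values for your event
```
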
To prepare server images (for libvirt), run from the root directory of this repo:

    packer init basis
    packer build -var-file global-variables.pkrvars.hcl basis
    packer build -var-file global-variables.pkrvars.hcl controller
    packer build -var-file global-variables.pkrvars.hcl vpn
    packer build -var-file global-variables.pkrvars.hcl checker
To create and launch the VMs (in libvirt), see libvirt-test-setup/README.md.
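
If you only want to confirm that the built images boot, the standard libvirt tooling is enough (the VM name below is a placeholder; use whatever libvirt-test-setup defines):

```bash
# List all defined VMs and their current state
virsh list --all

# Start a VM and attach to its serial console (exit with Ctrl+])
virsh start gameserver      # placeholder VM name
virsh console gameserver
```
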
By default, these IPs are used:
- VPN Gateway: 10.32.250.1
- Gameserver: 10.32.250.2
- Checker server: 10.32.250.3+
- Team N: 10.32.N.0/24
- Nop-Team: 10.32.1.0/24
- Organizer network: 10.32.0.0/24
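
For a quick sanity check of this addressing scheme (from a host inside the game network, assuming the defaults were not changed):

```bash
# Default infrastructure addresses from the list above
ping -c 1 10.32.250.1   # VPN gateway
ping -c 1 10.32.250.2   # gameserver
ping -c 1 10.32.250.3   # first checker server
```
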
Interesting commands: `update-server` (pull updates and rebuild things).
Checker scripts belong on this machine (`/home/saarctf/checkers`, owned by user saarctf).
The CTF timer needs a manual start (on exactly one machine), and new hosts must be manually added to Prometheus for monitoring (`/root/prometheus-add-server.sh <ip>`).
A bunch of interesting scripts live in `/root`; check them out.
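
A typical deployment of checkers plus monitoring might look like this (the local checkers/ directory and the remote IPs are placeholders; the target path, owner, and the Prometheus helper script are the ones named above):

```bash
# Copy checker scripts to the gameserver and fix ownership
rsync -av checkers/ root@<gameserver-ip>:/home/saarctf/checkers/
ssh root@<gameserver-ip> 'chown -R saarctf:saarctf /home/saarctf/checkers'

# On the gameserver: add another host to Prometheus monitoring
/root/prometheus-add-server.sh <checker-ip>
```
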
This image can be reconfigured to fill almost any role by using the scripts in `/root` to disable individual components. Possible setups include a backup server (databases turned into replicas of the original databases), a dedicated management/monitoring server (database as replica, monitoring disabled on the original host), or a dedicated scoreboard/submitter server (database off, monitoring off, `systemctl start scoreboard`).
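
A hedged sketch of the scoreboard/submitter variant; prefer the helper scripts in /root where they apply, and note that the `postgresql` and `prometheus` unit names below are assumptions — only `scoreboard` is documented in this README:

```bash
# Dedicated scoreboard/submitter host (sketch): database off, monitoring off,
# scoreboard daemon on. Unit names 'postgresql' and 'prometheus' are assumed.
systemctl disable --now postgresql    # "db off" (or configure a replica instead)
systemctl disable --now prometheus    # "monitoring off"
systemctl start scoreboard            # serve the scoreboard from this host
```
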
PostgreSQL (:5432), Redis (:6379), RabbitMQ.
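
To verify the services are up (standard client checks; authentication details depend on your configuration):

```bash
# PostgreSQL: run a trivial query as the postgres superuser
sudo -u postgres psql -c 'SELECT 1;'

# Redis: should answer PONG
redis-cli -h 127.0.0.1 -p 6379 ping

# RabbitMQ: overall node status
rabbitmqctl status
```
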
Flask app running under uwsgi / user saarctf. Nginx frontend.
- `http://<ip>:8080/`
- Restart: `systemctl restart uwsgi`
- Logs: `/var/log/uwsgi/app/controlserver.log`
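
A quick smoke test after a restart (assuming the port above):

```bash
# Expect an HTTP status line from the control panel
curl -sI http://<ip>:8080/ | head -n 1

# Follow the application log while testing
tail -f /var/log/uwsgi/app/controlserver.log
```
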
Tornado app (systemd), Nginx frontend.
- `http://<ip>:8081/` / `http://127.0.0.1:20000`
- Restart: `systemctl restart flower`
- Logs: `/var/log/flower.log`
Coder running in a Docker container as user saarctf.
- `http://<ip>:8082/`
- Restart: `docker restart coder-server`
- Logs: `docker logs coder-server`
Static folder with files, served by nginx.
- `http://<ip>/`
- Restart: `systemctl restart nginx`
- Logs: `/var/log/nginx/access.log` and `/var/log/nginx/error.log`
- Config: `/etc/nginx/sites-available/scoreboard`
The scoreboard is automatically created if the CTF timer is running on this machine. If not, use the scoreboard daemon instead (`systemctl start scoreboard`; never run it in parallel with the CTF timer).
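
One way to respect the "not in parallel with the CTF timer" rule when starting the daemon (a sketch using plain systemd checks):

```bash
# Start the scoreboard daemon only if the CTF timer is not active on this host
if systemctl is-active --quiet ctftimer; then
  echo "ctftimer runs here and already rebuilds the scoreboard - not starting the daemon"
else
  systemctl start scoreboard
fi
```
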
Needs manual start
Triggers time-based events (new round, scoreboard rebuild, start/stop of CTF). Exactly one instance on one server must run at any time.
- Start: `systemctl start ctftimer`
- Restart: `systemctl restart ctftimer`
- Logs: `/var/log/ctftimer.log` (interesting messages are usually shown in the control panel dashboard)
C++ application that receives flags from teams.
- `nc <ip> 31337`
- Restart: `systemctl restart submission-server`
- Rebuild/Update: `update-server`
- Logs: `/var/log/submission-server.log`
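
For a manual smoke test of the submitter (the flag value is a placeholder; the accepted flag format depends on your saarCTF configuration):

```bash
# Submit a single (placeholder) flag and print the server's response
echo 'SAAR{AAAAAAAAAAAAAAAAAAAAAAAA}' | nc <ip> 31337
```
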
Monitors itself, Grafana, and localhost by default; other servers must be manually added using `/root/prometheus-add-server.sh <ip>`. Results can be seen in Grafana.
- `http://localhost:9090/`
- Restart: `systemctl restart prometheus`
- Logs: `journalctl -u prometheus`
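
Adding a host and verifying that it shows up as a scrape target (the second command uses Prometheus' standard HTTP API):

```bash
# Register the new host (helper script shipped on this image)
/root/prometheus-add-server.sh <new-server-ip>

# List configured scrape targets and their health
curl -s http://localhost:9090/api/v1/targets | python3 -m json.tool
```
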
Configured to display stats from the database and Prometheus.
- `http://<ip>:3000/`
- Restart: `systemctl restart grafana`
- Logs: `/var/log/grafana/grafana.log`
- Config: `/etc/grafana/grafana.ini`
tcpdump needs manual start / stop
Runs many OpenVPN instances and handles the network management. The OpenVPN configuration files should be (re)built at least once on this machine.
Three servers per team, managed by systemd. The service names are:
- `vpn@teamXYZ` (tunX) for the single, self-hosted VPN (whole /24 in one connection)
- `vpn2@teamXYZ-cloud` (tun100X) for the cloud-hosted team-member endpoint (upper /25, multiple connections possible)
- `vpn@teamXYZ-vulnbox` (tun200X) for the single, cloud-hosted vulnbox connection (/30 for the cloud box, config not given to players)
Activation rules:
- `vpn@teamX-vulnbox` is always active (players can't mess too much with it, except by booting a vulnbox)
- If `vpn@teamX-vulnbox` is connected, the team-hosted VPN `vpn@teamX` is down (avoiding conflicts with team members using the old config)
- If `vpn@teamX` is connected, the cloud-hosted player VPN `vpn2@teamX-cloud` is down (avoid both configs being used at the same time)
- Server for Team X listening on `<ip>:10000+X` / `<ip>:12000+X` / `<ip>:14000+X` (udp)
- Interface of Team X: `tunX` / `tun100X` / `tun200X`
- Start one / all: `systemctl start vpn@teamX` / `systemctl start vpn`
- Restart one / all: `systemctl restart vpn@teamX` / `systemctl restart vpn vpn@\*`
- Stop one / all: `systemctl stop vpn@teamX` / `systemctl stop vpn vpn@\*`
- Logs: `/var/log/openvpn/output-teamX.log` and `/var/log/openvpn/openvpn-status-teamX.log`
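
To inspect a single team's VPN (team 5 as an example; the exact team identifier format may differ in your setup), combine the service and log names above:

```bash
# systemd view of team 5's self-hosted VPN instance
systemctl status vpn@team5

# OpenVPN's own output and status files for that team
tail -n 50 /var/log/openvpn/output-team5.log
cat /var/log/openvpn/openvpn-status-team5.log
```
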
Writes traffic summary to database.
- Restart: `systemctl restart trafficstats`
- Logs: `/var/log/trafficstats.log`
Based on IPTables.
Edit /opt/gameserver/vpn/iptables.sh if you need to change something permanently.
On restart, INPUT and FORWARD chains are replaced.
Inserts rules for NAT and TCP Timestamp removal.
- Restart: `systemctl restart firewall`
- Logs: `/var/log/firewall.log`
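
To inspect what the firewall service actually installed (standard iptables listings):

```bash
# Chains that are replaced on restart
iptables -L INPUT -n -v
iptables -L FORWARD -n -v

# NAT rules; the mangle table is where TCP-timestamp stripping would show up
# if it is implemented via netfilter
iptables -t nat -L -n -v
iptables -t mangle -L -n -v
```
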
Inserts rules into IPTables that open/close VPN or ban single teams.
- Restart: `systemctl restart manage-iptables`
- Logs: `/var/log/firewall.log`
Needs manual start
Captures traffic: game traffic (between gameservers and teams) and team traffic (between teams).
- Start: `systemctl start tcpdump-game tcpdump-team`
- Restart: `systemctl restart tcpdump-game tcpdump-team`
- Logs: `/var/log/tcpdump.log`
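
Checking that the captures are actually running:

```bash
# Both capture services and the log file named above
systemctl status tcpdump-game tcpdump-team
tail -f /var/log/tcpdump.log
```
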
Website that displays the status of the VPN connections and tests connectivity using ping.
The website itself is served by nginx.
Includes a background worker (service vpnboard-celery).
- Restart: `systemctl restart vpnboard`
- Logs: `/var/log/vpnboard.log`
Runs only a Celery Worker. No checker scripts need to be placed on this machine.
Needs manual start because each Celery worker needs a unique name.
After creation, run `celery-configure <SERVER-NUMBER>`.
From then on, the Celery worker will start on boot.
- Configuration: `/etc/celery.conf`
- Restart: `systemctl restart celery`
- Logs: `/var/log/celery/XYZ.log`
Manual worker invocation:

    screen -R celery
    celery-run <unique-hostname> <number-of-processes>
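
Putting the checker bring-up together for, e.g., the third checker server (the number, hostname, and process count below are example values):

```bash
# One-time configuration: give this worker a unique number/name
celery-configure 3

# The worker now starts on boot; (re)start it right away and watch the logs
systemctl restart celery
tail -f /var/log/celery/*.log

# Alternative: run a worker manually inside a screen session
screen -R celery
# ...then, inside the screen session (example values):
celery-run checker3 4
```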