Amateur Radio Log Server - A web-based ADIF log management system for amateur radio operators.
LogShackBaby provides a secure, containerized solution for uploading, storing, and managing amateur radio QSO logs in ADIF 3.1.6 format with multi-factor authentication and API support.
- ✅ ADIF 3.1.6 Support - Full compliance with ADIF 3.1.6 specification
- ✅ Complete ADIF Field Processing - Captures ALL ADIF fields from uploaded logs
- ✅ ADIF Log Upload - Parse and store amateur radio logs in ADIF format
- ✅ ADIF Log Export - Download logs in standard ADIF format
- ✅ Automatic Deduplication - Prevent duplicate QSO entries
- ✅ Web Interface - Clean, responsive UI for log management
- ✅ Search & Filter - Find logs by callsign, band, mode, and date
- ✅ Statistics Dashboard - View QSO counts, bands, modes, and more
- ✅ Additional Fields Display - View all extra ADIF fields captured in logs
- 🏆 Contest Management - Create and manage ham radio contests
- 🏆 Leaderboards - Real-time rankings with medals for top performers
- 🏆 Flexible Scoring - Configure points, band multipliers, and mode bonuses
- 🏆 Auto-Population - Automatically scan logs for eligible contest QSOs
- 🏆 Detailed Stats - View individual contest entries and QSO breakdowns
- 🏆 Role-Based Access - Contest admins manage, users view leaderboards
- 🔐 User Registration & Authentication - Secure account management
- 🔐 Role-Based Access Control - Four user roles: user, contestadmin, logadmin, sysop
- 🔐 Multi-Factor Authentication (MFA) - TOTP support for Google Authenticator, Authy, and Microsoft Authenticator
- 🔐 API Keys - Secure programmatic access for log uploads
- 🔐 Password Hashing - bcrypt-based secure password storage
- 🔐 Session Management - Database-backed sessions for multi-worker support
- 🐳 Containerized - Docker-based deployment
- 🐳 Microservices - Separate contest service for scalability
- 🐍 Python Backend - Flask web framework with SQLAlchemy ORM
- 🗄️ PostgreSQL Database - Reliable data storage with persistent volumes
- 🌐 JavaScript Frontend - Client-side rendering, no frameworks required
- 🔒 NGINX Reverse Proxy - SSL/TLS termination support
- Docker and Docker Compose
- 2GB RAM minimum
- 10GB disk space for logs
- Clone or extract the project

  ```bash
  cd /home/joe/source/logshackbaby
  ```

- Configure environment

  ```bash
  cp .env.example .env
  nano .env   # Edit with your secure passwords
  ```

- Generate a secure secret key

  ```bash
  python3 -c "import secrets; print(secrets.token_hex(32))"
  # Add this to your .env file as SECRET_KEY
  ```

- Start the containers

  ```bash
  docker-compose up -d
  ```

- Check the logs

  ```bash
  docker-compose logs -f
  ```

- Access the application
  - Open your browser to http://localhost (or your server IP)
  - For SSL: configure NGINX with your certificates (see NGINX Configuration)
- Register an account
- Click "Register" on the login page
- Enter your callsign (e.g., W1ABC)
- Provide your email and password
- Click "Register"
- Note: The first registered user automatically becomes a sysop (system administrator)
- User Roles
- user (default) - Can manage their own logs
- contestadmin - Read-only access to all user logs with custom report generator
- logadmin - Can view and reset logs for all users
- sysop - Full administrative access to create, modify, and delete users
- Enable Two-Factor Authentication (Recommended)
- Login to your account
- Go to Settings tab
- Click "Enable 2FA"
- Scan QR code with your authenticator app
- Enter the 6-digit code to verify
- Create an API Key
- Go to "API Keys" tab
- Click "Create New API Key"
- Add a description (e.g., "N1MM Logger")
- Save the key immediately - it won't be shown again!
- Upload Your First Log
- Go to "Upload" tab
- Choose your ADIF file (.adi or .adif)
- Click "Upload ADIF File"
- Enter your API key when prompted
Use this checklist to ensure proper deployment of LogShackBaby.
- Docker 20.10+ installed
- Docker Compose 2.0+ installed
- Minimum 2GB RAM available
- Minimum 10GB disk space available
- Ports 80, 443, 5000 available (or alternative ports configured)
- Root/sudo access for Docker
- Static IP address or domain name configured
- DNS records pointing to server (if using domain)
- Firewall rules configured:
- Port 80 (HTTP) open to internet
- Port 443 (HTTPS) open to internet
- Port 5000 only accessible locally or via Docker network
- Port 5432 (PostgreSQL) blocked from internet
- Copy `.env.example` to `.env`
- Generate secure database password: `python3 -c "import secrets; print(secrets.token_urlsafe(16))"`
- Generate secure Flask secret key: `python3 -c "import secrets; print(secrets.token_hex(32))"`
- Update `.env` with the generated values (see the sketch after this list)
- Verify `.env` file permissions (should be 600): `chmod 600 .env`
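For reference, a minimal `.env` might look like the sketch below. The variable names are assumptions inferred from settings referenced elsewhere in this guide (`DB_PASSWORD`, `SECRET_KEY`, `DATABASE_URL`, `FLASK_ENV`); treat `.env.example` as the authoritative list.

```bash
# Hypothetical .env sketch -- confirm names against .env.example
DB_PASSWORD=<output of secrets.token_urlsafe(16)>
SECRET_KEY=<output of secrets.token_hex(32)>
# Host "db" assumes the compose service name; adjust if yours differs
DATABASE_URL=postgresql://logshackbaby:<DB_PASSWORD>@db:5432/logshackbaby
FLASK_ENV=production
```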
- Obtain SSL certificates (Let's Encrypt, commercial CA, etc.)
- Place certificates in the `nginx/ssl/` directory:
  - cert.pem (certificate)
  - key.pem (private key)
- Update `nginx/nginx.conf` with SSL configuration (a sketch follows this list)
- Uncomment HTTPS server block in nginx.conf
- Update server_name with your domain
- Enable HTTP to HTTPS redirect
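Once uncommented and filled in, the HTTPS server block will look roughly like this sketch; the exact block shipped in `nginx/nginx.conf` may differ, and the domain is a placeholder:

```nginx
# Hypothetical sketch -- adapt to the actual blocks in nginx/nginx.conf
server {
    listen 80;
    server_name logs.example.com;           # your domain
    return 301 https://$host$request_uri;   # HTTP -> HTTPS redirect
}

server {
    listen 443 ssl;
    server_name logs.example.com;           # your domain
    ssl_certificate     /etc/nginx/ssl/cert.pem;
    ssl_certificate_key /etc/nginx/ssl/key.pem;

    location / {
        proxy_pass http://localhost:5000;   # the app service, as in the proxy example later in this page
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```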
- Review `backend/app.py` configuration
- Verify `MAX_CONTENT_LENGTH` (default 16MB)
- Check `SQLALCHEMY_DATABASE_URI` in docker-compose.yml
- Confirm worker count in Dockerfile CMD (default: 4)
- Clone/copy LogShackBaby to server:

  ```bash
  cd /opt
  git clone <repo> logshackbaby   # or copy files
  cd logshackbaby
  ```

- Create `.env` file with secure values
- Verify all required files are present: `ls -la backend/ frontend/ database/ nginx/`
- Start containers:
docker-compose up -d
- Check container status:
docker-compose ps
- Verify all containers are "Up" and healthy
- Check application logs:
docker-compose logs app
- Check database logs:
docker-compose logs db
- Test health endpoint:
curl http://localhost/api/health
- Expected response:
{"status":"healthy"}
- Database tables are created automatically on first start
- Verify tables exist:
docker exec -it logshackbaby-db psql -U logshackbaby -d logshackbaby -c "\dt"
- Expected tables: users, api_keys, log_entries, upload_logs
- Open web interface in browser
- Create test account
- Enable MFA on test account
- Create test API key
- Upload sample_log.adi
- Verify logs appear in dashboard
- Change default passwords in `.env`
- Verify SECRET_KEY is random and secure
- Enable HTTPS only (redirect all plain-HTTP traffic in production)
- Review CORS settings in `backend/app.py`
- Disable debug mode (FLASK_ENV=production)
- Database only accessible via Docker network
- Strong database password set
- Regular backup schedule established
- Database logs reviewed for suspicious activity
- Firewall configured (ufw, iptables, etc.):

  ```bash
  ufw allow 80/tcp
  ufw allow 443/tcp
  ufw deny 5000/tcp
  ufw deny 5432/tcp
  ufw enable
  ```

- Fail2ban configured for brute-force protection
- Rate limiting configured in NGINX (see the sketch after this list)
- Consider adding Cloudflare or similar DDoS protection
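A minimal NGINX rate-limiting sketch; the zone name, rate, and burst values below are illustrative assumptions, not settings shipped with LogShackBaby:

```nginx
# Hypothetical sketch -- tune rate/burst for your club's traffic
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

server {
    location /api/ {
        limit_req zone=api_limit burst=20 nodelay;
        proxy_pass http://localhost:5000;   # upstream as in the proxy example later in this page
    }
}
```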
- Use TLS 1.2 or higher only
- Strong cipher suites configured
- HSTS header enabled
- SSL certificate auto-renewal configured (if using Let's Encrypt)
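These SSL/TLS items map to nginx directives roughly as in this sketch (an illustration, not the project's shipped config):

```nginx
# Hypothetical sketch -- place inside the HTTPS server block
ssl_protocols TLSv1.2 TLSv1.3;        # TLS 1.2 or higher only
ssl_ciphers HIGH:!aNULL:!MD5;         # strong cipher suites
ssl_prefer_server_ciphers on;
add_header Strict-Transport-Security "max-age=31536000" always;   # HSTS
```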
- Configure log rotation:
  ```
  # Add to /etc/logrotate.d/logshackbaby
  /var/lib/docker/containers/*/*.log {
      rotate 7
      daily
      compress
      missingok
      delaycompress
      copytruncate
  }
  ```
- Set up centralized logging (optional)
- Monitor disk space usage
- Automated database backup script:
  ```bash
  #!/bin/bash
  docker exec logshackbaby-db pg_dump -U logshackbaby logshackbaby > \
      /backup/logshackbaby-$(date +%Y%m%d).sql
  ```
- Add to crontab:
0 2 * * * /opt/logshackbaby/backup.sh
- Test backup restoration procedure
- Verify backups stored off-site
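To keep the backup directory from growing without bound, the script can also prune old dumps; the 14-day retention below is an illustrative choice, not a project default:

```bash
# Hypothetical addition to backup.sh -- prune dumps older than 14 days
find /backup -name 'logshackbaby-*.sql' -mtime +14 -delete
```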
- Subscribe to security advisories
- Regular update schedule (monthly recommended)
- Update procedure documented
- Rollback procedure documented
- Set up uptime monitoring (UptimeRobot, Pingdom, etc.)
- Monitor endpoint: http://your-domain/api/health
- Alert on downtime
- Monitor disk space
- Monitor database size
- Monitor CPU/RAM usage
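If you don't use a hosted monitor, a cron-driven check against the health endpoint is a minimal alternative; this sketch assumes a working `mail` command and uses a placeholder address:

```bash
#!/bin/bash
# Hypothetical health-check sketch -- run from cron every few minutes
if ! curl -fsS http://localhost/api/health | grep -q healthy; then
    echo "LogShackBaby health check failed at $(date)" \
        | mail -s "LogShackBaby DOWN" admin@example.com
fi
```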
- All tests pass (see TESTING.md)
- SSL certificate valid and installed
- DNS configured correctly
- Backups tested and working
- Monitoring configured
- Documentation reviewed
- Admin accounts created
- MFA enabled on admin accounts
- Announce to club members
- Provide registration instructions
- Share API documentation
- Monitor logs for first 24 hours
- Be available for support questions
- Verify user registrations working
- Check upload functionality
- Monitor error logs
- Review performance metrics
- Collect user feedback
If using existing NGINX:
- Comment out nginx service in docker-compose.yml
- Add proxy configuration to existing NGINX
- Test proxy to localhost:5000
- Verify headers forwarded correctly
If integrating with existing Docker network:
- Update docker-compose.yml network configuration
- Connect to existing network:
  networks:
    logshackbaby-network:
      external: true
      name: your-existing-network
If using existing PostgreSQL (not recommended):
- Remove db service from docker-compose.yml
- Update DATABASE_URL in .env
- Ensure network connectivity
- Create database and user manually
- Run migrations manually
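Creating the database and user manually might look like the following psql sketch; the role and database names follow this guide's defaults, and the password is a placeholder:

```sql
-- Hypothetical sketch -- run as the postgres superuser
CREATE USER logshackbaby WITH PASSWORD 'change-me';
CREATE DATABASE logshackbaby OWNER logshackbaby;
GRANT ALL PRIVILEGES ON DATABASE logshackbaby TO logshackbaby;
```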
- Check Docker daemon status: `systemctl status docker`
- Check Docker logs: `journalctl -u docker`
- Verify disk space: `df -h`
- Check Docker network: `docker network ls`
- Review application logs: `docker-compose logs -f app`
- Check database connection
- Verify environment variables
- Test health endpoint
- Check PostgreSQL logs: `docker-compose logs -f db`
- Verify database is healthy: `docker exec logshackbaby-db pg_isready`
- Check database connections: `docker exec -it logshackbaby-db psql -U logshackbaby -d logshackbaby -c "SELECT * FROM pg_stat_activity;"`
- Verify ports are listening: `netstat -tlnp | grep -E "80|443|5000"`
- Check firewall rules: `ufw status` or `iptables -L`
- Test connectivity: `curl -v http://localhost/api/health`
- Check DNS resolution
If deployment fails:
- Stop containers: `docker-compose down`
- Restore previous version
- Restore database backup: `docker exec -i logshackbaby-db psql -U logshackbaby logshackbaby < backup.sql`
- Restart containers: `docker-compose up -d`
- Verify functionality
- Document issues for review
Before testing, ensure:
- Docker and Docker Compose are installed
- Ports 80, 443, and 5000 are available
- You have the sample_log.adi file
cd /home/joe/source/logshackbaby
./start.sh

Wait for all containers to start (about 10-15 seconds).

docker-compose ps

Expected output:

NAME                 COMMAND                    STATUS         PORTS
logshackbaby-app     "gunicorn..."              Up             0.0.0.0:5000->5000/tcp
logshackbaby-db      "docker-entrypoint..."     Up (healthy)   5432/tcp
logshackbaby-nginx   "/docker-entrypoint..."    Up             0.0.0.0:80->80/tcp

curl http://localhost/api/health

Expected: {"status":"healthy"}
curl -X POST http://localhost/api/register \
-H "Content-Type: application/json" \
-d '{
"callsign": "TEST1",
"email": "test@example.com",
"password": "TestPassword123"
}'

Expected: Registration successful message
curl -X POST http://localhost/api/login \
-H "Content-Type: application/json" \
-d '{
"callsign": "TEST1",
"password": "TestPassword123"
}'

Save the session_token from the response.
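If you have jq installed (an assumption; it is not required by LogShackBaby), you can log in and capture the token in one step:

```bash
# Hypothetical convenience -- requires jq
SESSION_TOKEN=$(curl -s -X POST http://localhost/api/login \
  -H "Content-Type: application/json" \
  -d '{"callsign":"TEST1","password":"TestPassword123"}' | jq -r '.session_token')
echo "$SESSION_TOKEN"
```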
Using the session token from step 5:
SESSION_TOKEN="your_session_token_here"
curl -X POST http://localhost/api/keys \
-H "Content-Type: application/json" \
-H "X-Session-Token: $SESSION_TOKEN" \
-d '{"description": "Test Key"}'Save the api_key from the response.
API_KEY="your_api_key_here"
curl -X POST http://localhost/api/logs/upload \
-H "X-API-Key: $API_KEY" \
-F "file=@sample_log.adi"Expected: Upload successful with counts of new/duplicate records
Test that all ADIF fields are captured:
# Run automated ADIF field test
python3 test_adif_fields.py

Expected: All tests pass, showing additional fields captured
curl -X GET http://localhost/api/logs/stats \
-H "X-Session-Token: $SESSION_TOKEN"Should show 10 QSOs from the sample log.
# Get all logs
curl -X GET "http://localhost/api/logs?page=1" \
-H "X-Session-Token: $SESSION_TOKEN"
# Search by callsign
curl -X GET "http://localhost/api/logs?callsign=W1ABC" \
-H "X-Session-Token: $SESSION_TOKEN"
# Filter by band
curl -X GET "http://localhost/api/logs?band=20m" \
-H "X-Session-Token: $SESSION_TOKEN"Navigate to: http://localhost
- Click "Register"
- Enter callsign: WEBTEST
- Enter email: web@test.com
- Enter password: WebTest123
- Confirm password
- Click "Register"
- Should redirect to login
- Enter callsign: WEBTEST
- Enter password: WebTest123
- Click "Login"
- Should see dashboard
- Go to "Settings" tab
- Click "Enable 2FA"
- Scan QR code with authenticator app (or enter secret manually)
- Enter 6-digit code
- Click "Verify & Enable"
- Logout and login again to test MFA
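For scripted MFA testing, oathtool (assuming it is installed; it is not part of LogShackBaby) can generate the same 6-digit codes from the base32 secret shown during setup:

```bash
# Hypothetical test helper -- requires oathtool and the secret from the 2FA setup screen
oathtool --totp -b "YOUR_BASE32_SECRET"
```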
- Go to "API Keys" tab
- Click "Create New API Key"
- Enter description: "Test Upload Key"
- Copy the displayed API key
- Verify key appears in the list
- Go to "Upload" tab
- Click "Choose File"
- Select sample_log.adi
- Click "Upload ADIF File"
- Enter API key when prompted
- Verify success message
- Check upload history
- Go to "My Logs" tab
- Verify 10 QSOs are displayed
- Check statistics cards show correct counts
- Test filters:
- Enter "W1" in callsign filter
- Select "20m" in band filter
- Click "Apply"
- Verify filtered results
Save as test_logshackbaby.sh:
```bash
#!/bin/bash
set -e
BASE_URL="http://localhost"
API="${BASE_URL}/api"
echo "🧪 LogShackBaby Automated Test Suite"
echo "=================================="
echo ""
# Test 1: Health Check
echo "Test 1: Health Check..."
HEALTH=$(curl -s ${API}/health)
if echo $HEALTH | grep -q "healthy"; then
echo "✅ Health check passed"
else
echo "❌ Health check failed"
exit 1
fi
# Test 2: Register User
echo ""
echo "Test 2: User Registration..."
REGISTER=$(curl -s -X POST ${API}/register \
-H "Content-Type: application/json" \
-d '{"callsign":"AUTO1","email":"auto@test.com","password":"AutoTest123"}')
if echo $REGISTER | grep -q "successful"; then
echo "✅ Registration passed"
else
echo "❌ Registration failed"
echo $REGISTER
exit 1
fi
# Test 3: Login
echo ""
echo "Test 3: User Login..."
LOGIN=$(curl -s -X POST ${API}/login \
-H "Content-Type: application/json" \
-d '{"callsign":"AUTO1","password":"AutoTest123"}')
SESSION_TOKEN=$(echo $LOGIN | grep -o '"session_token":"[^"]*' | cut -d'"' -f4)
if [ -n "$SESSION_TOKEN" ]; then
echo "✅ Login passed"
echo " Session token: ${SESSION_TOKEN:0:20}..."
else
echo "❌ Login failed"
echo $LOGIN
exit 1
fi
# Test 4: Create API Key
echo ""
echo "Test 4: Create API Key..."
API_KEY_RESPONSE=$(curl -s -X POST ${API}/keys \
-H "Content-Type: application/json" \
-H "X-Session-Token: $SESSION_TOKEN" \
-d '{"description":"Automated Test"}')
API_KEY=$(echo $API_KEY_RESPONSE | grep -o '"api_key":"[^"]*' | cut -d'"' -f4)
if [ -n "$API_KEY" ]; then
echo "✅ API key creation passed"
echo " API key: ${API_KEY:0:20}..."
else
echo "❌ API key creation failed"
echo $API_KEY_RESPONSE
exit 1
fi
# Test 5: Upload Log
echo ""
echo "Test 5: Upload ADIF Log..."
if [ -f "sample_log.adi" ]; then
UPLOAD=$(curl -s -X POST ${API}/logs/upload \
-H "X-API-Key: $API_KEY" \
-F "file=@sample_log.adi")
if echo $UPLOAD | grep -q "Upload successful"; then
echo "✅ Upload passed"
echo $UPLOAD | grep -o '"new":[0-9]*' | tr -d '"'
else
echo "❌ Upload failed"
echo $UPLOAD
exit 1
fi
else
echo "⚠️ sample_log.adi not found, skipping upload test"
fi
# Test 6: Get Statistics
echo ""
echo "Test 6: Get Statistics..."
STATS=$(curl -s -X GET ${API}/logs/stats \
-H "X-Session-Token: $SESSION_TOKEN")
if echo $STATS | grep -q "total_qsos"; then
echo "✅ Statistics passed"
TOTAL=$(echo $STATS | grep -o '"total_qsos":[0-9]*' | cut -d: -f2)
echo " Total QSOs: $TOTAL"
else
echo "❌ Statistics failed"
echo $STATS
exit 1
fi
# Test 7: Get Logs
echo ""
echo "Test 7: Get Logs..."
LOGS=$(curl -s -X GET "${API}/logs?page=1" \
-H "X-Session-Token: $SESSION_TOKEN")
if echo $LOGS | grep -q "logs"; then
echo "✅ Get logs passed"
else
echo "❌ Get logs failed"
echo $LOGS
exit 1
fi
# Test 8: Logout
echo ""
echo "Test 8: Logout..."
LOGOUT=$(curl -s -X POST ${API}/logout \
-H "X-Session-Token: $SESSION_TOKEN")
if echo $LOGOUT | grep -q "successfully"; then
echo "✅ Logout passed"
else
echo "❌ Logout failed"
echo $LOGOUT
exit 1
fi
echo ""
echo "=================================="
echo "✅ All tests passed!"
echo "=================================="Make it executable and run:
chmod +x test_logshackbaby.sh
./test_logshackbaby.sh
```

Test concurrent uploads:
# Create multiple ADIF files
for i in {1..10}; do
cp sample_log.adi test_log_${i}.adi
done
# Test upload performance
ab -n 100 -c 10 \
-H "X-API-Key: your_api_key" \
-p sample_log.adi \
-T "multipart/form-data; boundary=1234567890" \
http://localhost/api/logs/upload

# Connect to database
docker exec -it logshackbaby-db psql -U logshackbaby -d logshackbaby
# Check table sizes
SELECT
schemaname,
tablename,
pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS size
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC;
# Check index usage
SELECT
schemaname,
tablename,
indexname,
idx_scan,
idx_tup_read,
idx_tup_fetch
FROM pg_stat_user_indexes
ORDER BY idx_scan DESC;

# Passwords should never appear in logs
docker-compose logs app | grep -i password || echo "✅ No passwords in logs"

# MFA secrets should be encrypted/hashed
docker exec -it logshackbaby-db psql -U logshackbaby -d logshackbaby -c \
"SELECT callsign, mfa_enabled, LENGTH(mfa_secret) as secret_length FROM users WHERE mfa_enabled = true;"# API keys should be hashed
docker exec -it logshackbaby-db psql -U logshackbaby -d logshackbaby -c \
"SELECT key_prefix, LENGTH(key_hash) as hash_length, created_at FROM api_keys LIMIT 5;"-
Check container status
docker-compose ps docker-compose logs
-
Verify database connection
docker exec logshackbaby-db pg_isready -U logshackbaby -
Check application logs
docker-compose logs -f app
-
Reset and retry
docker-compose down -v docker-compose up -d sleep 10 ./test_logshack.sh
After running all tests, you should have:
- ✅ 1-3 registered users
- ✅ 1-3 API keys
- ✅ 10+ log entries (from sample_log.adi)
- ✅ Upload history showing successful imports
- ✅ Statistics showing correct counts
# Stop and remove containers
docker-compose down
# Remove volumes (deletes all data)
docker-compose down -v
# Remove all test files
rm -f test_log_*.adi

For contestadmin users, a powerful report generator is available:
- Navigate to Contest Admin Tab
- Click "Report Generator" subtab
- Select Fields
- Choose from standard ADIF fields (QSO Date, Time, Call, Band, Mode, etc.)
- All 100+ ADIF 3.1.6 fields are displayed as selection options
- Fields marked with ● contain actual data in uploaded logs
- Fields without marker are available for future logs
- Apply Filters (Optional)
- Date range (from/to)
- Bands (comma-separated: 20m, 40m, 80m)
- Modes (comma-separated: FT8, SSB, CW)
- Generate Report
- Click "Generate Report" to view results
- Export to CSV for analysis in Excel or other tools
- Features
- Read-only access to all user logs
- Complete ADIF 3.1.6 field selection (100+ fields available)
- Visual indicators show which fields contain data
- CSV export for external analysis
- Up to 10,000 records per report
All API endpoints except /register and /login require authentication via:
- Session Token (Web UI): `X-Session-Token` header
- API Key (Programmatic): `X-API-Key` header
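For example, with placeholder credentials and the endpoints documented below:

```bash
# Session-token authentication (web UI flows)
curl http://localhost/api/logs/stats \
  -H "X-Session-Token: your_session_token"

# API-key authentication (programmatic uploads)
curl -X POST http://localhost/api/logs/upload \
  -H "X-API-Key: your_api_key_here" \
  -F "file=@my_log.adi"
```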
POST /api/register
Content-Type: application/json
{
"callsign": "W1ABC",
"email": "w1abc@example.com",
"password": "secure_password"
}

POST /api/login
Content-Type: application/json
{
"callsign": "W1ABC",
"password": "secure_password"
}

Response:
{
"session_token": "...",
"callsign": "W1ABC",
"mfa_required": false
}

POST /api/logs/upload
X-API-Key: your_api_key_here
Content-Type: multipart/form-data
file: <ADIF file>

Response:
{
"message": "Upload successful",
"total": 150,
"new": 145,
"duplicates": 5,
"errors": 0
}

GET /api/logs?page=1&per_page=50&callsign=W1ABC&band=20m&mode=SSB
X-Session-Token: your_session_token

GET /api/logs/stats
X-Session-Token: your_session_token

GET /api/logs/export?callsign=W1ABC&band=20m&mode=SSB
X-Session-Token: your_session_token

Response: ADIF file download
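For instance, to save a filtered export locally (documented query parameters; the output filename is arbitrary):

```bash
curl -o w1abc_20m.adi \
  -H "X-Session-Token: your_session_token" \
  "http://localhost/api/logs/export?callsign=W1ABC&band=20m"
```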
POST /api/keys
X-Session-Token: your_session_token
Content-Type: application/json
{
"description": "N1MM Logger"
}

curl -X POST http://localhost/api/logs/upload \
-H "X-API-Key: your_api_key_here" \
-F "file=@my_log.adi"import requests
api_key = "your_api_key_here"
log_file = "my_log.adi"
with open(log_file, 'rb') as f:
response = requests.post(
'http://localhost/api/logs/upload',
headers={'X-API-Key': api_key},
files={'file': f}
)
print(response.json())

LogShackBaby fully supports ADIF 3.1.6 format files and processes all ADIF fields.
Core Fields (stored in dedicated database columns for fast searching/filtering):
- `QSO_DATE` - Date of QSO (YYYYMMDD) [Required]
- `TIME_ON` - Start time (HHMMSS) [Required]
- `CALL` - Contacted station's callsign [Required]
- `BAND` - Band (e.g., 20m, 2m)
- `MODE` - Mode (e.g., SSB, CW, FT8)
- `FREQ` - Frequency in MHz
- `RST_SENT` / `RST_RCVD` - Signal reports
- `STATION_CALLSIGN` - Your callsign
- `MY_GRIDSQUARE` - Your grid square
- `GRIDSQUARE` - Contacted station's grid
- `NAME` - Operator name
- `QTH` - Location
- `COMMENT` - QSO notes
- `QSO_DATE_OFF` / `TIME_OFF` - End date/time
All Other ADIF Fields (automatically stored in additional_fields JSON column):
The parser captures ALL fields from the ADIF 3.1.6 specification, including:
- Station details: `OPERATOR`, `OWNER_CALLSIGN`, `MY_CITY`, `MY_COUNTRY`, etc.
- Contest fields: `CONTEST_ID`, `SRX`, `STX`, `PRECEDENCE`, `CLASS`, etc.
- QSL tracking: `QSL_SENT`, `QSL_RCVD`, `LOTW_QSL_SENT`, `EQSL_QSL_RCVD`, etc.
- Power/Propagation: `TX_PWR`, `PROP_MODE`, `SAT_NAME`, `ANT_AZ`, etc.
- Award tracking: `AWARD_SUBMITTED`, `AWARD_GRANTED`, `CREDIT_SUBMITTED`, etc.
- Digital modes: `SUBMODE`, application-specific fields (N1MM, LOTW, eQSL, etc.)
- Location data: `LAT`, `LON`, `CNTY`, `STATE`, `COUNTRY`, `DXCC`, etc.
- And 100+ more fields...
In the web interface, the "Logs" tab shows a column called "Additional" which displays:
- A badge showing the count of additional ADIF fields captured for each QSO
- Hover over the badge to see a tooltip with all additional field names and values
- This makes it easy to see which QSOs have extra metadata
```
<ADIF_VER:5>3.1.6
<PROGRAMID:12>LogShackBaby
<EOH>
<CALL:5>W1ABC <QSO_DATE:8>20240101 <TIME_ON:6>143000
<BAND:3>20m <MODE:3>SSB <FREQ:8>14.250000
<RST_SENT:2>59 <RST_RCVD:2>59
<TX_PWR:3>100 <OPERATOR:5>K1XYZ <CONTEST_ID:7>CQ-WPX
<EOR>
```
All fields are captured and stored, preserving the complete QSO record.
If you have an existing NGINX reverse proxy with SSL:
- Disable the built-in NGINX container

  Edit `docker-compose.yml` and comment out the nginx service.

- Configure your existing NGINX

  Add this location block to your NGINX config:

  ```nginx
  location / {
      proxy_pass http://localhost:5000;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
      client_max_body_size 20M;
  }
  ```

- Ensure Docker network access

  If your NGINX is in a Docker container, add it to the `logshackbaby-network`:

  ```yaml
  networks:
    logshackbaby-network:
      external: true
  ```
- Change default passwords
  - Update `DB_PASSWORD` in `.env`
  - Generate a secure `SECRET_KEY`
- Enable HTTPS
- Configure SSL certificates in NGINX
- Force HTTPS redirects
- Firewall rules
- Only expose port 80/443 to internet
- Keep port 5000 internal
- Regular backups

  ```bash
  # Backup database
  docker exec logshackbaby-db pg_dump -U logshackbaby logshackbaby > backup.sql

  # Backup with date
  docker exec logshackbaby-db pg_dump -U logshackbaby logshackbaby > backup-$(date +%Y%m%d).sql
  ```
- Update regularly

  ```bash
  docker-compose pull
  docker-compose up -d
  ```
```bash
# View logs
docker-compose logs -f app
docker-compose logs -f db

# Restart services
docker-compose restart app
docker-compose restart db

# Update the application
git pull   # or extract new version
docker-compose down
docker-compose build --no-cache
docker-compose up -d

# Create backup
docker exec logshackbaby-db pg_dump -U logshackbaby logshackbaby > backup.sql

# Restore backup
docker exec -i logshackbaby-db psql -U logshackbaby logshackbaby < backup.sql

# Connect to the database
docker exec -it logshackbaby-db psql -U logshackbaby -d logshackbaby
```

```sql
-- Delete logs older than 5 years
DELETE FROM log_entries
WHERE uploaded_at < NOW() - INTERVAL '5 years';

-- Vacuum database
VACUUM ANALYZE;
```

If the application won't start:

```bash
# Check logs
docker-compose logs app

# Check database
docker-compose logs db

# Rebuild
docker-compose down
docker-compose build --no-cache
docker-compose up -d
```

If you see database connection errors:

```bash
# Wait for database to be ready
docker-compose restart app

# Check database health
docker exec logshackbaby-db pg_isready -U logshackbaby
```

If uploads fail:

- Check file is valid ADIF format
- Verify API key is correct
- Check file size (max 16MB)
- Review application logs
- Ensure time is synchronized on server and client
- TOTP codes are time-sensitive (30-second window)
- Try codes before and after current code
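To confirm server time is in sync (a common cause of rejected TOTP codes), check the system clock; the commands below assume a systemd host, with chrony as an optional alternative:

```bash
timedatectl status    # look for "System clock synchronized: yes"
# or, if chrony is installed:
chronyc tracking
```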
# Backend
cd backend
python -m venv venv
source venv/bin/activate # or venv\Scripts\activate on Windows
pip install -r requirements.txt
# Set environment variables
export DATABASE_URL="postgresql://logshackbaby:password@localhost:5432/logshackbaby"
export SECRET_KEY="your-secret-key"
# Initialize database
flask --app app init-db
# Run development server
python app.py

logshackbaby/
├── backend/ # Python Flask application
│ ├── app.py # Main application
│ ├── models.py # Database models
│ ├── auth.py # Authentication utilities
│ ├── adif_parser.py # ADIF file parser
│ ├── requirements.txt # Python dependencies
│ └── Dockerfile # Backend container
├── frontend/ # Web interface
│ ├── index.html # Main HTML
│ ├── css/
│ │ └── style.css # Styles
│ └── js/
│ └── app.js # JavaScript application
├── database/ # PostgreSQL configuration
├── nginx/ # NGINX configuration
│ ├── nginx.conf # Reverse proxy config
│ └── ssl/ # SSL certificates
├── docker-compose.yml # Container orchestration
└── README.md # This file
This project is provided as-is for amateur radio use.
For issues or questions:
- Check the troubleshooting section
- Review application logs
- Verify configuration files
- GAMIFICATION.md - Complete guide to contest management and leaderboards
- CONTEST_QUICKSTART.md - Quick start guide for setting up contests
- API_EXAMPLES.md - API usage examples
- TESTING.md - Testing procedures
Contributions welcome! Consider adding:
- Additional ADIF field support
- Log export functionality
- Real-time contest updates
- DXCC tracking
- Award tracking (WAS, WAC, etc.)
- Logbook of the World (LoTW) integration
- QRZ/HamDB lookup integration
- Contest templates and presets
- Achievement badges and milestones
73 de LogShackBaby Team 📻✨