Joe Cupano edited this page Jan 31, 2026 · 2 revisions

LogShackBaby 📻

Amateur Radio Log Server - A web-based ADIF log management system for amateur radio operators.

LogShackBaby provides a secure, containerized solution for uploading, storing, and managing amateur radio QSO logs in ADIF 3.1.6 format with multi-factor authentication and API support.

Features

Core Functionality

  • ADIF 3.1.6 Support - Full compliance with ADIF 3.1.6 specification
  • Complete ADIF Field Processing - Captures ALL ADIF fields from uploaded logs
  • ADIF Log Upload - Parse and store amateur radio logs in ADIF format
  • ADIF Log Export - Download logs in standard ADIF format
  • Automatic Deduplication - Prevent duplicate QSO entries
  • Web Interface - Clean, responsive UI for log management
  • Search & Filter - Find logs by callsign, band, mode, and date
  • Statistics Dashboard - View QSO counts, bands, modes, and more
  • Additional Fields Display - View all extra ADIF fields captured in logs
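To illustrate the automatic deduplication feature, here is a minimal sketch of how duplicate QSOs can be fingerprinted. The field set (call, date, time, band, mode) is an assumption for illustration; the authoritative comparison logic lives in the backend.

```python
import hashlib

def qso_dedup_key(qso: dict) -> str:
    """Build a case-insensitive fingerprint for a QSO record.

    The field set here is illustrative, not LogShackBaby's exact schema.
    """
    parts = (qso.get("CALL", ""), qso.get("QSO_DATE", ""),
             qso.get("TIME_ON", ""), qso.get("BAND", ""), qso.get("MODE", ""))
    return hashlib.sha256("|".join(p.upper() for p in parts).encode()).hexdigest()
```

An upload can then skip any record whose key already exists for that user.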

Gamification & Contests

  • 🏆 Contest Management - Create and manage ham radio contests
  • 🏆 Leaderboards - Real-time rankings with medals for top performers
  • 🏆 Flexible Scoring - Configure points, band multipliers, and mode bonuses
  • 🏆 Auto-Population - Automatically scan logs for eligible contest QSOs
  • 🏆 Detailed Stats - View individual contest entries and QSO breakdowns
  • 🏆 Role-Based Access - Contest admins manage, users view leaderboards
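As a sketch of how points, band multipliers, and mode bonuses might combine (the rule structure and values below are illustrative, not LogShackBaby's actual scoring schema):

```python
# Illustrative rule tables; a real contest would configure these in the UI.
BASE_POINTS = 1
BAND_MULTIPLIERS = {"160m": 2.0, "6m": 1.5}  # unlisted bands default to 1.0
MODE_BONUS = {"CW": 1}                        # extra points per CW QSO

def score_qso(band: str, mode: str) -> float:
    # Base points plus any mode bonus, scaled by the band multiplier
    points = BASE_POINTS + MODE_BONUS.get(mode.upper(), 0)
    return points * BAND_MULTIPLIERS.get(band.lower(), 1.0)

def total_score(qsos: list[tuple[str, str]]) -> float:
    return sum(score_qso(band, mode) for band, mode in qsos)
```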

Security

  • 🔐 User Registration & Authentication - Secure account management
  • 🔐 Role-Based Access Control - Four user roles: user, contestadmin, logadmin, sysop
  • 🔐 Multi-Factor Authentication (MFA) - TOTP support for Google Authenticator, Authy, and Microsoft Authenticator
  • 🔐 API Keys - Secure programmatic access for log uploads
  • 🔐 Password Hashing - bcrypt-based secure password storage
  • 🔐 Session Management - Database-backed sessions for multi-worker support

Architecture

  • 🐳 Containerized - Docker-based deployment
  • 🐳 Microservices - Separate contest service for scalability
  • 🐍 Python Backend - Flask web framework with SQLAlchemy ORM
  • 🗄️ PostgreSQL Database - Reliable data storage with persistent volumes
  • 🌐 JavaScript Frontend - Client-side rendering, no frameworks required
  • 🔒 NGINX Reverse Proxy - SSL/TLS termination support

Setup & Installation

Prerequisites

  • Docker and Docker Compose
  • 2GB RAM minimum
  • 10GB disk space for logs

Installation

  1. Clone or extract the project

    cd /home/joe/source/logshackbaby
  2. Configure environment

    cp .env.example .env
    nano .env  # Edit with your secure passwords
  3. Generate a secure secret key

    python3 -c "import secrets; print(secrets.token_hex(32))"
    # Add this to your .env file as SECRET_KEY
  4. Start the containers

    docker-compose up -d
  5. Check the logs

    docker-compose logs -f
  6. Access the application

    • Open your browser to: http://localhost (or your server IP)
    • For SSL: Configure NGINX with your certificates (see NGINX Configuration)

First Time Setup

  1. Register an account

    • Click "Register" on the login page
    • Enter your callsign (e.g., W1ABC)
    • Provide your email and password
    • Click "Register"
    • Note: The first registered user automatically becomes a sysop (system administrator)
  2. User Roles

    • user (default) - Can manage their own logs
    • contestadmin - Read-only access to all user logs with custom report generator
    • logadmin - Can view and reset logs for all users
    • sysop - Full administrative access to create, modify, and delete users
  3. Enable Two-Factor Authentication (Recommended)

    • Login to your account

    • Go to Settings tab
    • Click "Enable 2FA"
    • Scan QR code with your authenticator app
    • Enter the 6-digit code to verify
  4. Create an API Key

    • Go to "API Keys" tab
    • Click "Create New API Key"
    • Add a description (e.g., "N1MM Logger")
    • Save the key immediately - it won't be shown again!
  5. Upload Your First Log

    • Go to "Upload" tab
    • Choose your ADIF file (.adi or .adif)
    • Click "Upload ADIF File"
    • Enter your API key when prompted

Pre-Deployment Checklist

Use this checklist to ensure proper deployment of LogShackBaby.

Server Requirements

  • Docker 20.10+ installed
  • Docker Compose 2.0+ installed
  • Minimum 2GB RAM available
  • Minimum 10GB disk space available
  • Ports 80, 443, 5000 available (or alternative ports configured)
  • Root/sudo access for Docker

Network Requirements

  • Static IP address or domain name configured
  • DNS records pointing to server (if using domain)
  • Firewall rules configured:
    • Port 80 (HTTP) open to internet
    • Port 443 (HTTPS) open to internet
    • Port 5000 only accessible locally or via Docker network
    • Port 5432 (PostgreSQL) blocked from internet

Environment Setup

  • Copy .env.example to .env
  • Generate secure database password:
    python3 -c "import secrets; print(secrets.token_urlsafe(16))"
  • Generate secure Flask secret key:
    python3 -c "import secrets; print(secrets.token_hex(32))"
  • Update .env with generated values
  • Verify .env file permissions (should be 600):
    chmod 600 .env
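A minimal .env sketch matching the checklist above. Values are placeholders to be replaced with your generated secrets, and the exact variable names should be confirmed against .env.example:

```shell
# .env -- placeholders only; substitute your generated values
DB_PASSWORD=paste-token_urlsafe-output-here
SECRET_KEY=paste-token_hex-output-here
FLASK_ENV=production
```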

SSL/TLS Configuration (Production)

  • Obtain SSL certificates (Let's Encrypt, commercial CA, etc.)
  • Place certificates in nginx/ssl/ directory:
    • cert.pem (certificate)
    • key.pem (private key)
  • Update nginx/nginx.conf with SSL configuration
  • Uncomment HTTPS server block in nginx.conf
  • Update server_name with your domain
  • Enable HTTP to HTTPS redirect
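Putting those items together, a sketch of the HTTPS server block for nginx.conf. The domain, the certificate paths inside the container, and the `app:5000` upstream service name are assumptions to adapt to your setup:

```nginx
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;  # HTTP to HTTPS redirect
}

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/cert.pem;
    ssl_certificate_key /etc/nginx/ssl/key.pem;

    location / {
        proxy_pass http://app:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        client_max_body_size 20M;
    }
}
```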

Application Configuration

  • Review backend/app.py configuration
  • Verify MAX_CONTENT_LENGTH (default 16MB)
  • Check SQLALCHEMY_DATABASE_URI in docker-compose.yml
  • Confirm worker count in Dockerfile CMD (default: 4)

Deployment Steps

1. Initial Deployment

  • Clone/copy LogShackBaby to server:
    cd /opt
    git clone <repo> logshackbaby  # or copy files
    cd logshackbaby

2. Configure Environment

  • Create .env file with secure values
  • Verify all required files are present:
    ls -la backend/ frontend/ database/ nginx/

3. Start Services

  • Start containers:
    docker-compose up -d
  • Check container status:
    docker-compose ps
  • Verify all containers are "Up" and healthy

4. Verify Deployment

  • Check application logs:
    docker-compose logs app
  • Check database logs:
    docker-compose logs db
  • Test health endpoint:
    curl http://localhost/api/health
  • Expected response: {"status":"healthy"}

5. Initialize Database

  • Database tables are created automatically on first start
  • Verify tables exist:
    docker exec -it logshackbaby-db psql -U logshackbaby -d logshackbaby -c "\dt"
  • Expected tables: users, api_keys, log_entries, upload_logs

6. Test Functionality

  • Open web interface in browser
  • Create test account
  • Enable MFA on test account
  • Create test API key
  • Upload sample_log.adi
  • Verify logs appear in dashboard

Security Hardening

Application Security

  • Change default passwords in .env
  • Verify SECRET_KEY is random and secure
  • Enable HTTPS only (redirect all HTTP traffic to HTTPS in production)
  • Review CORS settings in backend/app.py
  • Disable debug mode (FLASK_ENV=production)

Database Security

  • Database only accessible via Docker network
  • Strong database password set
  • Regular backup schedule established
  • Database logs reviewed for suspicious activity

Network Security

  • Firewall configured (ufw, iptables, etc.):
    ufw allow 80/tcp
    ufw allow 443/tcp
    ufw deny 5000/tcp
    ufw deny 5432/tcp
    ufw enable
  • Fail2ban configured for brute-force protection
  • Rate limiting configured in NGINX
  • Consider adding Cloudflare or similar DDoS protection

SSL/TLS Security

  • Use TLS 1.2 or higher only
  • Strong cipher suites configured
  • HSTS header enabled
  • SSL certificate auto-renewal configured (if using Let's Encrypt)
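Those checklist items translate roughly to the following NGINX directives. The cipher list shown is a generic example; pick a vetted list from a current hardening guide:

```nginx
ssl_protocols TLSv1.2 TLSv1.3;   # TLS 1.2 or higher only
ssl_prefer_server_ciphers on;
ssl_ciphers HIGH:!aNULL:!MD5;    # example only; use a vetted cipher list
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
```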

Monitoring & Maintenance

Logging

  • Configure log rotation:
    # Add to /etc/logrotate.d/logshackbaby
    /var/lib/docker/containers/*/*.log {
      rotate 7
      daily
      compress
      missingok
      delaycompress
      copytruncate
    }
  • Set up centralized logging (optional)
  • Monitor disk space usage

Backups

  • Automated database backup script:
    #!/bin/bash
    docker exec logshackbaby-db pg_dump -U logshackbaby logshackbaby > \
      /backup/logshackbaby-$(date +%Y%m%d).sql
  • Add to crontab:
    0 2 * * * /opt/logshackbaby/backup.sh
  • Test backup restoration procedure
  • Verify backups stored off-site
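The backup script above can be extended with a retention window so old dumps do not fill the disk. The /backup directory, the 14-day default, and the `run` argument convention are assumptions:

```shell
#!/bin/bash
# Sketch: nightly pg_dump plus pruning of old dumps.
set -euo pipefail

BACKUP_DIR="${BACKUP_DIR:-/backup}"
KEEP_DAYS="${KEEP_DAYS:-14}"

backup() {
    docker exec logshackbaby-db pg_dump -U logshackbaby logshackbaby \
        > "$BACKUP_DIR/logshackbaby-$(date +%Y%m%d).sql"
}

prune_old_backups() {
    # Remove dump files older than KEEP_DAYS days
    find "$BACKUP_DIR" -name 'logshackbaby-*.sql' -mtime +"$KEEP_DAYS" -delete
}

# Invoked from cron as: backup.sh run
if [[ "${1:-}" == "run" ]]; then
    backup
    prune_old_backups
fi
```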

Updates

  • Subscribe to security advisories
  • Regular update schedule (monthly recommended)
  • Update procedure documented
  • Rollback procedure documented

Health Checks

  • Set up uptime monitoring (UptimeRobot, Pingdom, etc.)
  • Monitor endpoint: http://your-domain/api/health
  • Alert on downtime
  • Monitor disk space
  • Monitor database size
  • Monitor CPU/RAM usage

Production Checklist

Before Go-Live

  • All tests pass (see TESTING.md)
  • SSL certificate valid and installed
  • DNS configured correctly
  • Backups tested and working
  • Monitoring configured
  • Documentation reviewed
  • Admin accounts created
  • MFA enabled on admin accounts

Go-Live

  • Announce to club members
  • Provide registration instructions
  • Share API documentation
  • Monitor logs for first 24 hours
  • Be available for support questions

Post-Deployment

  • Verify user registrations working
  • Check upload functionality
  • Monitor error logs
  • Review performance metrics
  • Collect user feedback

Integration with Existing Infrastructure

Existing NGINX Proxy

If using existing NGINX:

  • Comment out nginx service in docker-compose.yml
  • Add proxy configuration to existing NGINX
  • Test proxy to localhost:5000
  • Verify headers forwarded correctly

Existing Docker Network

If integrating with existing Docker network:

  • Update docker-compose.yml network configuration
  • Connect to existing network:
    networks:
      logshackbaby-network:
        external: true
        name: your-existing-network

Existing PostgreSQL

If using existing PostgreSQL (not recommended):

  • Remove db service from docker-compose.yml
  • Update DATABASE_URL in .env
  • Ensure network connectivity
  • Create database and user manually
  • Run migrations manually

Troubleshooting

Container Issues

  • Check Docker daemon status: systemctl status docker
  • Check Docker logs: journalctl -u docker
  • Verify disk space: df -h
  • Check Docker network: docker network ls

Application Issues

  • Review application logs: docker-compose logs -f app
  • Check database connection
  • Verify environment variables
  • Test health endpoint

Database Issues

  • Check PostgreSQL logs: docker-compose logs -f db
  • Verify database is healthy: docker exec logshackbaby-db pg_isready
  • Check database connections: docker exec -it logshackbaby-db psql -U logshackbaby -d logshackbaby -c "SELECT * FROM pg_stat_activity;"

Network Issues

  • Verify ports are listening: netstat -tlnp | grep -E "80|443|5000"
  • Check firewall rules: ufw status or iptables -L
  • Test connectivity: curl -v http://localhost/api/health
  • Check DNS resolution

Rollback Procedure

If deployment fails:

  1. Stop containers: docker-compose down
  2. Restore previous version
  3. Restore database backup:
    docker exec -i logshackbaby-db psql -U logshackbaby logshackbaby < backup.sql
  4. Restart containers: docker-compose up -d
  5. Verify functionality
  6. Document issues for review

Testing

Before testing, ensure:

  • Docker and Docker Compose are installed
  • Ports 80, 443, and 5000 are available
  • You have the sample_log.adi file

Quick Test Procedure

1. Start the Application

cd /home/joe/source/logshackbaby
./start.sh

Wait for all containers to start (about 10-15 seconds).

2. Verify Containers are Running

docker-compose ps

Expected output:

NAME                COMMAND                  STATUS              PORTS
logshackbaby-app        "gunicorn..."            Up                  0.0.0.0:5000->5000/tcp
logshackbaby-db         "docker-entrypoint..."   Up (healthy)        5432/tcp
logshackbaby-nginx      "/docker-entrypoint..."  Up                  0.0.0.0:80->80/tcp

3. Test Health Endpoint

curl http://localhost/api/health

Expected: {"status":"healthy"}

4. Test User Registration

curl -X POST http://localhost/api/register \
  -H "Content-Type: application/json" \
  -d '{
    "callsign": "TEST1",
    "email": "test@example.com",
    "password": "TestPassword123"
  }'

Expected: Registration successful message

5. Test User Login

curl -X POST http://localhost/api/login \
  -H "Content-Type: application/json" \
  -d '{
    "callsign": "TEST1",
    "password": "TestPassword123"
  }'

Save the session_token from the response.

6. Create API Key

Using the session token from step 5:

SESSION_TOKEN="your_session_token_here"

curl -X POST http://localhost/api/keys \
  -H "Content-Type: application/json" \
  -H "X-Session-Token: $SESSION_TOKEN" \
  -d '{"description": "Test Key"}'

Save the api_key from the response.

7. Upload Sample Log

API_KEY="your_api_key_here"

curl -X POST http://localhost/api/logs/upload \
  -H "X-API-Key: $API_KEY" \
  -F "file=@sample_log.adi"

Expected: Upload successful with counts of new/duplicate records

8. Test ADIF Field Processing

Test that all ADIF fields are captured:

# Run automated ADIF field test
python3 test_adif_fields.py

Expected: All tests pass, showing additional fields captured

9. View Statistics

curl -X GET http://localhost/api/logs/stats \
  -H "X-Session-Token: $SESSION_TOKEN"

Should show 10 QSOs from the sample log.

10. Search Logs

# Get all logs
curl -X GET "http://localhost/api/logs?page=1" \
  -H "X-Session-Token: $SESSION_TOKEN"

# Search by callsign
curl -X GET "http://localhost/api/logs?callsign=W1ABC" \
  -H "X-Session-Token: $SESSION_TOKEN"

# Filter by band
curl -X GET "http://localhost/api/logs?band=20m" \
  -H "X-Session-Token: $SESSION_TOKEN"

Web UI Testing

1. Open Browser

Navigate to: http://localhost

2. Test Registration Flow

  1. Click "Register"
  2. Enter callsign: WEBTEST
  3. Enter email: web@test.com
  4. Enter password: WebTest123
  5. Confirm password
  6. Click "Register"
  7. Should redirect to login

3. Test Login Flow

  1. Enter callsign: WEBTEST
  2. Enter password: WebTest123
  3. Click "Login"
  4. Should see dashboard

4. Test MFA Setup

  1. Go to "Settings" tab
  2. Click "Enable 2FA"
  3. Scan QR code with authenticator app (or enter secret manually)
  4. Enter 6-digit code
  5. Click "Verify & Enable"
  6. Logout and login again to test MFA

5. Test API Key Creation

  1. Go to "API Keys" tab
  2. Click "Create New API Key"
  3. Enter description: "Test Upload Key"
  4. Copy the displayed API key
  5. Verify key appears in the list

6. Test Log Upload

  1. Go to "Upload" tab
  2. Click "Choose File"
  3. Select sample_log.adi
  4. Click "Upload ADIF File"
  5. Enter API key when prompted
  6. Verify success message
  7. Check upload history

7. Test Log Viewing

  1. Go to "My Logs" tab
  2. Verify 10 QSOs are displayed
  3. Check statistics cards show correct counts
  4. Test filters:
    • Enter "W1" in callsign filter
    • Select "20m" in band filter
    • Click "Apply"
  5. Verify filtered results

Automated Test Script

Save as test_logshackbaby.sh:

#!/bin/bash

set -e

BASE_URL="http://localhost"
API="${BASE_URL}/api"

echo "🧪 LogShackBaby Automated Test Suite"
echo "=================================="
echo ""

# Test 1: Health Check
echo "Test 1: Health Check..."
HEALTH=$(curl -s ${API}/health)
if echo $HEALTH | grep -q "healthy"; then
    echo "✅ Health check passed"
else
    echo "❌ Health check failed"
    exit 1
fi

# Test 2: Register User
echo ""
echo "Test 2: User Registration..."
REGISTER=$(curl -s -X POST ${API}/register \
    -H "Content-Type: application/json" \
    -d '{"callsign":"AUTO1","email":"auto@test.com","password":"AutoTest123"}')

if echo $REGISTER | grep -q "successful"; then
    echo "✅ Registration passed"
else
    echo "❌ Registration failed"
    echo $REGISTER
    exit 1
fi

# Test 3: Login
echo ""
echo "Test 3: User Login..."
LOGIN=$(curl -s -X POST ${API}/login \
    -H "Content-Type: application/json" \
    -d '{"callsign":"AUTO1","password":"AutoTest123"}')

SESSION_TOKEN=$(echo $LOGIN | grep -o '"session_token":"[^"]*' | cut -d'"' -f4)

if [ -n "$SESSION_TOKEN" ]; then
    echo "✅ Login passed"
    echo "   Session token: ${SESSION_TOKEN:0:20}..."
else
    echo "❌ Login failed"
    echo $LOGIN
    exit 1
fi

# Test 4: Create API Key
echo ""
echo "Test 4: Create API Key..."
API_KEY_RESPONSE=$(curl -s -X POST ${API}/keys \
    -H "Content-Type: application/json" \
    -H "X-Session-Token: $SESSION_TOKEN" \
    -d '{"description":"Automated Test"}')

API_KEY=$(echo $API_KEY_RESPONSE | grep -o '"api_key":"[^"]*' | cut -d'"' -f4)

if [ -n "$API_KEY" ]; then
    echo "✅ API key creation passed"
    echo "   API key: ${API_KEY:0:20}..."
else
    echo "❌ API key creation failed"
    echo $API_KEY_RESPONSE
    exit 1
fi

# Test 5: Upload Log
echo ""
echo "Test 5: Upload ADIF Log..."
if [ -f "sample_log.adi" ]; then
    UPLOAD=$(curl -s -X POST ${API}/logs/upload \
        -H "X-API-Key: $API_KEY" \
        -F "file=@sample_log.adi")
    
    if echo $UPLOAD | grep -q "Upload successful"; then
        echo "✅ Upload passed"
        echo $UPLOAD | grep -o '"new":[0-9]*' | tr -d '"'
    else
        echo "❌ Upload failed"
        echo $UPLOAD
        exit 1
    fi
else
    echo "⚠️  sample_log.adi not found, skipping upload test"
fi

# Test 6: Get Statistics
echo ""
echo "Test 6: Get Statistics..."
STATS=$(curl -s -X GET ${API}/logs/stats \
    -H "X-Session-Token: $SESSION_TOKEN")

if echo $STATS | grep -q "total_qsos"; then
    echo "✅ Statistics passed"
    TOTAL=$(echo $STATS | grep -o '"total_qsos":[0-9]*' | cut -d: -f2)
    echo "   Total QSOs: $TOTAL"
else
    echo "❌ Statistics failed"
    echo $STATS
    exit 1
fi

# Test 7: Get Logs
echo ""
echo "Test 7: Get Logs..."
LOGS=$(curl -s -X GET "${API}/logs?page=1" \
    -H "X-Session-Token: $SESSION_TOKEN")

if echo $LOGS | grep -q "logs"; then
    echo "✅ Get logs passed"
else
    echo "❌ Get logs failed"
    echo $LOGS
    exit 1
fi

# Test 8: Logout
echo ""
echo "Test 8: Logout..."
LOGOUT=$(curl -s -X POST ${API}/logout \
    -H "X-Session-Token: $SESSION_TOKEN")

if echo $LOGOUT | grep -q "successfully"; then
    echo "✅ Logout passed"
else
    echo "❌ Logout failed"
    echo $LOGOUT
    exit 1
fi

echo ""
echo "=================================="
echo "✅ All tests passed!"
echo "=================================="

Make it executable and run:

chmod +x test_logshackbaby.sh
./test_logshackbaby.sh

Performance Testing

Load Test with Apache Bench

Test concurrent uploads:

# Create multiple ADIF files
for i in {1..10}; do
    cp sample_log.adi test_log_${i}.adi
done

# Test upload performance
# NOTE: ab sends the -p file verbatim, so it must be a pre-built
# multipart/form-data payload (hypothetical file below) whose
# boundary matches the -T header
ab -n 100 -c 10 \
   -H "X-API-Key: your_api_key" \
   -p multipart_body.txt \
   -T "multipart/form-data; boundary=1234567890" \
   http://localhost/api/logs/upload

Database Performance

# Connect to database
docker exec -it logshackbaby-db psql -U logshackbaby -d logshackbaby

# Check table sizes
SELECT 
    schemaname,
    tablename,
    pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS size
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC;

# Check index usage
SELECT 
    schemaname,
    tablename,
    indexname,
    idx_scan,
    idx_tup_read,
    idx_tup_fetch
FROM pg_stat_user_indexes
ORDER BY idx_scan DESC;

Security Testing

Test Password Hashing

# Passwords should never appear in logs
docker-compose logs app | grep -i password || echo "✅ No passwords in logs"

Test MFA

# MFA secrets should be encrypted/hashed
docker exec -it logshackbaby-db psql -U logshackbaby -d logshackbaby -c \
    "SELECT callsign, mfa_enabled, LENGTH(mfa_secret) as secret_length FROM users WHERE mfa_enabled = true;"

Test API Key Security

# API keys should be hashed
docker exec -it logshackbaby-db psql -U logshackbaby -d logshackbaby -c \
    "SELECT key_prefix, LENGTH(key_hash) as hash_length, created_at FROM api_keys LIMIT 5;"

Troubleshooting Tests

If tests fail:

  1. Check container status

    docker-compose ps
    docker-compose logs
  2. Verify database connection

    docker exec logshackbaby-db pg_isready -U logshackbaby
  3. Check application logs

    docker-compose logs -f app
  4. Reset and retry

    docker-compose down -v
    docker-compose up -d
    sleep 10
    ./test_logshackbaby.sh

Expected Test Results

After running all tests, you should have:

  • ✅ 1-3 registered users
  • ✅ 1-3 API keys
  • ✅ 10+ log entries (from sample_log.adi)
  • ✅ Upload history showing successful imports
  • ✅ Statistics showing correct counts

Clean Up After Testing

# Stop and remove containers
docker-compose down

# Remove volumes (deletes all data)
docker-compose down -v

# Remove all test files
rm -f test_log_*.adi

Contest Administration

Report Generator

For contestadmin users, a powerful report generator is available:

  1. Navigate to Contest Admin Tab

    • Click "Report Generator" subtab
  2. Select Fields

    • Choose from standard ADIF fields (QSO Date, Time, Call, Band, Mode, etc.)
    • All 100+ ADIF 3.1.6 fields are displayed as selection options
    • Fields marked with ● contain actual data in uploaded logs
    • Fields without marker are available for future logs
  3. Apply Filters (Optional)

    • Date range (from/to)
    • Bands (comma-separated: 20m, 40m, 80m)
    • Modes (comma-separated: FT8, SSB, CW)
  4. Generate Report

    • Click "Generate Report" to view results
    • Export to CSV for analysis in Excel or other tools
  5. Features

    • Read-only access to all user logs
    • Complete ADIF 3.1.6 field selection (100+ fields available)
    • Visual indicators show which fields contain data
    • CSV export for external analysis
    • Up to 10,000 records per report

API Documentation

Authentication

All API endpoints except /register, /login, and /health require authentication via:

  • Session Token (Web UI): X-Session-Token header
  • API Key (Programmatic): X-API-Key header

Endpoints

User Registration

POST /api/register
Content-Type: application/json

{
  "callsign": "W1ABC",
  "email": "w1abc@example.com",
  "password": "secure_password"
}

User Login

POST /api/login
Content-Type: application/json

{
  "callsign": "W1ABC",
  "password": "secure_password"
}

Response:

{
  "session_token": "...",
  "callsign": "W1ABC",
  "mfa_required": false
}

Upload ADIF Log

POST /api/logs/upload
X-API-Key: your_api_key_here
Content-Type: multipart/form-data

file: <ADIF file>

Response:

{
  "message": "Upload successful",
  "total": 150,
  "new": 145,
  "duplicates": 5,
  "errors": 0
}

Get Logs

GET /api/logs?page=1&per_page=50&callsign=W1ABC&band=20m&mode=SSB
X-Session-Token: your_session_token

Get Statistics

GET /api/logs/stats
X-Session-Token: your_session_token

Export Logs (ADIF)

GET /api/logs/export?callsign=W1ABC&band=20m&mode=SSB
X-Session-Token: your_session_token

Response: ADIF file download
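Programmatic export follows the same pattern as upload. A sketch using only the standard library (the helper names here are illustrative, not part of the project):

```python
import urllib.parse
import urllib.request

BASE_URL = "http://localhost"

def build_export_request(session_token: str, **filters: str) -> urllib.request.Request:
    # Optional filters (callsign, band, mode) become query parameters
    query = urllib.parse.urlencode(filters)
    url = f"{BASE_URL}/api/logs/export" + (f"?{query}" if query else "")
    return urllib.request.Request(url, headers={"X-Session-Token": session_token})

def download_export(session_token: str, path: str, **filters: str) -> None:
    # Stream the ADIF response to a local file
    req = build_export_request(session_token, **filters)
    with urllib.request.urlopen(req) as resp, open(path, "wb") as f:
        f.write(resp.read())
```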

Create API Key

POST /api/keys
X-Session-Token: your_session_token
Content-Type: application/json

{
  "description": "N1MM Logger"
}

Example: Upload Log with cURL

curl -X POST http://localhost/api/logs/upload \
  -H "X-API-Key: your_api_key_here" \
  -F "file=@my_log.adi"

Example: Upload Log with Python

import requests

api_key = "your_api_key_here"
log_file = "my_log.adi"

with open(log_file, 'rb') as f:
    response = requests.post(
        'http://localhost/api/logs/upload',
        headers={'X-API-Key': api_key},
        files={'file': f}
    )

print(response.json())

ADIF Format Support

LogShackBaby fully supports ADIF 3.1.6 format files and processes all ADIF fields.

How ADIF Fields Are Processed

Core Fields (stored in dedicated database columns for fast searching/filtering):

  • QSO_DATE - Date of QSO (YYYYMMDD) [Required]
  • TIME_ON - Start time (HHMMSS) [Required]
  • CALL - Contacted station's callsign [Required]
  • BAND - Band (e.g., 20m, 2m)
  • MODE - Mode (e.g., SSB, CW, FT8)
  • FREQ - Frequency in MHz
  • RST_SENT / RST_RCVD - Signal reports
  • STATION_CALLSIGN - Your callsign
  • MY_GRIDSQUARE - Your grid square
  • GRIDSQUARE - Contacted station's grid
  • NAME - Operator name
  • QTH - Location
  • COMMENT - QSO notes
  • QSO_DATE_OFF / TIME_OFF - End date/time

All Other ADIF Fields (automatically stored in additional_fields JSON column):

The parser captures ALL fields from the ADIF 3.1.6 specification, including:

  • Station details: OPERATOR, OWNER_CALLSIGN, MY_CITY, MY_COUNTRY, etc.
  • Contest fields: CONTEST_ID, SRX, STX, PRECEDENCE, CLASS, etc.
  • QSL tracking: QSL_SENT, QSL_RCVD, LOTW_QSL_SENT, EQSL_QSL_RCVD, etc.
  • Power/Propagation: TX_PWR, PROP_MODE, SAT_NAME, ANT_AZ, etc.
  • Award tracking: AWARD_SUBMITTED, AWARD_GRANTED, CREDIT_SUBMITTED, etc.
  • Digital modes: SUBMODE, application-specific fields (N1MM, LOTW, eQSL, etc.)
  • Location data: LAT, LON, CNTY, STATE, COUNTRY, DXCC, etc.
  • And 100+ more fields...

Viewing Additional Fields

In the web interface, the "My Logs" tab shows a column called "Additional" which displays:

  • A badge showing the count of additional ADIF fields captured for each QSO
  • Hover over the badge to see a tooltip with all additional field names and values
  • This makes it easy to see which QSOs have extra metadata

Example ADIF File

<ADIF_VER:5>3.1.6
<PROGRAMID:12>LogShackBaby
<EOH>

<CALL:5>W1ABC <QSO_DATE:8>20240101 <TIME_ON:6>143000 
<BAND:3>20m <MODE:3>SSB <FREQ:9>14.250000
<RST_SENT:2>59 <RST_RCVD:2>59
<TX_PWR:3>100 <OPERATOR:5>K1XYZ <CONTEST_ID:6>CQ-WPX
<EOR>

All fields are captured and stored, preserving the complete QSO record.
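The record format above (angle-bracket tags carrying a field name and a byte length) can be parsed with a short sketch. This illustrates the general `<FIELD:length>value` scheme only; the project's actual parser is backend/adif_parser.py:

```python
import re

# Matches <NAME:LENGTH> and <NAME:LENGTH:TYPE> tags
TAG = re.compile(r"<(\w+):(\d+)(?::[^>]*)?>", re.IGNORECASE)

def parse_adif(text: str) -> list[dict]:
    """Split an ADIF document into records of {FIELD: value}."""
    eoh = re.search(r"<EOH>", text, re.IGNORECASE)
    body = text[eoh.end():] if eoh else text  # skip the header, if any
    records = []
    for chunk in re.split(r"<EOR>", body, flags=re.IGNORECASE):
        fields, pos = {}, 0
        while (m := TAG.search(chunk, pos)):
            name, length = m.group(1).upper(), int(m.group(2))
            fields[name] = chunk[m.end():m.end() + length]
            pos = m.end() + length
        if fields:
            records.append(fields)
    return records
```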

Deployment

Production Deployment with Existing NGINX

If you have an existing NGINX reverse proxy with SSL:

  1. Disable the built-in NGINX container

    Edit docker-compose.yml and comment out the nginx service.

  2. Configure your existing NGINX

    Add this location block to your NGINX config:

    location / {
        proxy_pass http://localhost:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        client_max_body_size 20M;
    }
  3. Ensure Docker network access

    If your NGINX is in a Docker container, add it to the logshackbaby-network:

    networks:
      logshackbaby-network:
        external: true

Security Hardening

  1. Change default passwords

    • Update DB_PASSWORD in .env
    • Generate secure SECRET_KEY
  2. Enable HTTPS

    • Configure SSL certificates in NGINX
    • Force HTTPS redirects
  3. Firewall rules

    • Only expose port 80/443 to internet
    • Keep port 5000 internal
  4. Regular backups

    # Backup database
    docker exec logshackbaby-db pg_dump -U logshackbaby logshackbaby > backup.sql
    
    # Backup with date
    docker exec logshackbaby-db pg_dump -U logshackbaby logshackbaby > backup-$(date +%Y%m%d).sql
  5. Update regularly

    docker-compose pull
    docker-compose up -d

Maintenance

View Logs

docker-compose logs -f app
docker-compose logs -f db

Restart Services

docker-compose restart app
docker-compose restart db

Update Application

git pull  # or extract new version
docker-compose down
docker-compose build --no-cache
docker-compose up -d

Database Backup

# Create backup
docker exec logshackbaby-db pg_dump -U logshackbaby logshackbaby > backup.sql

# Restore backup
docker exec -i logshackbaby-db psql -U logshackbaby logshackbaby < backup.sql

Access Database

docker exec -it logshackbaby-db psql -U logshackbaby -d logshackbaby

Clean Up Old Data

-- Delete logs older than 5 years
DELETE FROM log_entries 
WHERE uploaded_at < NOW() - INTERVAL '5 years';

-- Vacuum database
VACUUM ANALYZE;

Troubleshooting

Container won't start

# Check logs
docker-compose logs app

# Check database
docker-compose logs db

# Rebuild
docker-compose down
docker-compose build --no-cache
docker-compose up -d

Database connection error

# Wait for database to be ready
docker-compose restart app

# Check database health
docker exec logshackbaby-db pg_isready -U logshackbaby

Upload fails

  • Check file is valid ADIF format
  • Verify API key is correct
  • Check file size (max 16MB)
  • Review application logs

MFA issues

  • Ensure time is synchronized on server and client
  • TOTP codes are time-sensitive (30-second window)
  • Try codes before and after current code
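The bullets above follow directly from how TOTP works. A standard-library sketch (illustrative, not LogShackBaby's implementation) shows why a server that checks one step either side of the current time accepts the previous and next codes:

```python
import hashlib
import hmac
import struct

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP keyed to the current 30-second time step."""
    return hotp(key, for_time // step, digits)

def verify(key: bytes, code: str, for_time: int, window: int = 1) -> bool:
    """Accept codes from adjacent time steps to absorb clock drift."""
    counter = for_time // 30
    return any(hmac.compare_digest(hotp(key, counter + off), code)
               for off in range(-window, window + 1))
```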

Development

Run locally without Docker

# Backend
cd backend
python -m venv venv
source venv/bin/activate  # or venv\Scripts\activate on Windows
pip install -r requirements.txt

# Set environment variables
export DATABASE_URL="postgresql://logshackbaby:password@localhost:5432/logshackbaby"
export SECRET_KEY="your-secret-key"

# Initialize database
flask --app app init-db

# Run development server
python app.py

Project Structure

logshackbaby/
├── backend/              # Python Flask application
│   ├── app.py           # Main application
│   ├── models.py        # Database models
│   ├── auth.py          # Authentication utilities
│   ├── adif_parser.py   # ADIF file parser
│   ├── requirements.txt # Python dependencies
│   └── Dockerfile       # Backend container
├── frontend/            # Web interface
│   ├── index.html      # Main HTML
│   ├── css/
│   │   └── style.css   # Styles
│   └── js/
│       └── app.js      # JavaScript application
├── database/           # PostgreSQL configuration
├── nginx/              # NGINX configuration
│   ├── nginx.conf     # Reverse proxy config
│   └── ssl/           # SSL certificates
├── docker-compose.yml # Container orchestration
└── README.md         # This file

License

This project is provided as-is for amateur radio use.

Support

For issues or questions:

  1. Check the troubleshooting section
  2. Review application logs
  3. Verify configuration files

Contributing

Contributions welcome! Consider adding:

  • Additional ADIF field support
  • Log export functionality
  • Real-time contest updates
  • DXCC tracking
  • Award tracking (WAS, WAC, etc.)
  • Logbook of the World (LoTW) integration
  • QRZ/HamDB lookup integration
  • Contest templates and presets
  • Achievement badges and milestones

73 de LogShackBaby Team 📻✨
