Recommended starting point:
setup-secrets.md is the concise, up-to-date guide for new deployments. This document is a detailed reference walkthrough, useful if you want more context on each step. For OpenClaw integration, see openclaw-integration.md.
This guide walks you through deploying DMAF to Google Cloud Platform using Cloud Run Jobs for cost-optimized batch processing.
Target Cost: $0-5/month within GCP free tier
┌─────────────────┐      ┌──────────────────┐      ┌─────────────────┐
│ Cloud Scheduler │ ───> │  Cloud Run Job   │ ───> │  Google Photos  │
│    (hourly)     │      │ (scan-once mode) │      │       API       │
└─────────────────┘      └──────────────────┘      └─────────────────┘
                                  │
                                  ├─> GCS Bucket (WhatsApp media)
                                  │
                                  └─> Firestore (deduplication DB)
Key Features:
- Scale-to-zero: No cost when not processing
- Serverless: No server management
- Automated: Runs on schedule without manual intervention
- Free tier eligible: All services have generous free tiers
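The scan-once flow can be sketched in a few lines of Python. This is an illustration of the batch + deduplication idea with hypothetical names, not DMAF's actual code; in production the `seen` mapping would be the Firestore collection, and "process" would mean face matching plus a Google Photos upload:

```python
import hashlib
from pathlib import Path

def scan_once(media_dir: str, seen: dict) -> list:
    """One batch pass: handle each file exactly once, keyed by content hash.

    `seen` stands in for the Firestore deduplication collection; in the
    real deployment it would be a Firestore client, not a dict.
    """
    processed = []
    for path in sorted(Path(media_dir).glob("*.jpg")):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in seen:          # already handled on a previous run
            continue
        seen[digest] = str(path)    # mark as processed (a Firestore doc in production)
        processed.append(path.name) # here: face match + Google Photos upload
    return processed
```

Because the state lives outside the container, the job can scale to zero between runs and still never process the same file twice.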
# Install gcloud CLI (if not already installed)
# See: https://cloud.google.com/sdk/docs/install
# Login to GCP
gcloud auth login
# Create a new project (or use existing)
export PROJECT_ID="dmaf-production" # Change to your project ID
gcloud projects create $PROJECT_ID --name="DMAF Production"
# Set current project
gcloud config set project $PROJECT_ID
# Enable billing (required for Cloud Run, even with free tier)
# Do this via console: https://console.cloud.google.com/billing
# Enable required APIs
gcloud services enable \
run.googleapis.com \
cloudbuild.googleapis.com \
cloudscheduler.googleapis.com \
firestore.googleapis.com \
secretmanager.googleapis.com \
storage.googleapis.com

Prerequisites:
- Docker installed locally (for testing)
- OAuth credentials (client_secret.json) from Google Cloud Console
- OAuth token (token.json) - generated by running locally first
- Known people images in data/known_people/
Firestore stores the deduplication database (which files have been processed).
# Create Firestore database (choose region close to you)
gcloud firestore databases create --location=nam5
# Verify creation
gcloud firestore databases list

Expected output:
NAME CREATE_TIME
(default) 2024-01-15T10:30:00
Store OAuth credentials and tokens in Secret Manager:
# Create secret for OAuth client credentials
gcloud secrets create dmaf-oauth-client \
--data-file=client_secret.json
# Create secret for OAuth token (run app locally first to generate token.json)
gcloud secrets create dmaf-oauth-token \
--data-file=token.json
# Create secret for config file
gcloud secrets create dmaf-config \
--data-file=config.yaml
# Verify secrets
gcloud secrets list

Note on Email Alerts: If you want to enable email notifications for borderline recognitions and errors, update your config.yaml with SMTP settings before creating the dmaf-config secret. See the Email Alerts section below for setup instructions.
Email alerts notify you about borderline recognitions, processing errors, and training updates even when DMAF runs in the cloud.
Why SendGrid for Cloud Deployment:
- ✅ Free tier: 100 emails/day (plenty for DMAF)
- ✅ No personal email risk
- ✅ API keys (not passwords)
- ✅ Perfect for serverless/cloud apps
Quick Setup (5 minutes):

1. Create SendGrid account: SendGrid Signup
2. Verify sender email: Settings → Sender Authentication → Verify Single Sender
3. Create API key: Settings → API Keys → Create with "Mail Send" permission
4. Add to config.yaml:

alerting:
  enabled: true
  recipients:
    - "your-email@gmail.com"
  batch_interval_minutes: 60
  borderline_offset: 0.1
  event_retention_days: 90
  smtp:
    host: "smtp.sendgrid.net"
    port: 587
    username: "apikey"
    password: "SG.your-api-key-here"  # Your SendGrid API key
    use_tls: true
    sender_email: "your-verified-email@gmail.com"

5. Update the secret:

# Update config secret with email settings
gcloud secrets versions add dmaf-config \
  --data-file=config.yaml
Alternative: For Gmail, Mailgun, AWS SES, or other providers, see the detailed DEPLOYMENT.md email setup guide.
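Under the hood, sending through SendGrid's SMTP relay is plain SMTP with the literal username "apikey" and the API key as the password. A minimal sketch of what an alert sender built on Python's standard library could look like (the helper names are illustrative, not DMAF's code):

```python
import smtplib
from email.message import EmailMessage

def build_alert(sender: str, recipients: list, events: list) -> EmailMessage:
    """Batch pending events into one email, mirroring batch_interval_minutes."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = ", ".join(recipients)
    msg["Subject"] = f"DMAF: {len(events)} new event(s)"
    msg.set_content("\n".join(events))
    return msg

def send_via_sendgrid(msg: EmailMessage, api_key: str) -> None:
    """Deliver via SendGrid's SMTP relay (matches the smtp: config above)."""
    with smtplib.SMTP("smtp.sendgrid.net", 587) as smtp:
        smtp.starttls()                 # use_tls: true
        smtp.login("apikey", api_key)   # username is literally "apikey"
        smtp.send_message(msg)
```

Batching events into one message per interval is what keeps the deployment well under SendGrid's 100-emails/day free tier.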
This bucket will store WhatsApp media synced from your phone.
# Create bucket (choose region matching Firestore)
gsutil mb -l us-central1 gs://$PROJECT_ID-whatsapp-media
# Set lifecycle rule to auto-delete after processing (optional)
cat > lifecycle.json << EOF
{
"lifecycle": {
"rule": [
{
"action": {"type": "Delete"},
"condition": {
"age": 30,
"matchesPrefix": ["processed/"]
}
}
]
}
}
EOF
gsutil lifecycle set lifecycle.json gs://$PROJECT_ID-whatsapp-media

# Submit build to Cloud Build (uses cloudbuild.yaml)
gcloud builds submit --config cloudbuild.yaml
# Verify image
gcloud container images list --repository=gcr.io/$PROJECT_ID

# Build image locally
docker build -t gcr.io/$PROJECT_ID/dmaf:latest .
# Authenticate Docker with GCR
gcloud auth configure-docker
# Push image
docker push gcr.io/$PROJECT_ID/dmaf:latest

Expected build time: 3-5 minutes (includes the ~600MB InsightFace model download)
Cloud Run Jobs execute the batch processing on a schedule.
# Create service account for the job
gcloud iam service-accounts create dmaf-runner \
--display-name="DMAF Cloud Run Job Service Account"
export SA_EMAIL="dmaf-runner@$PROJECT_ID.iam.gserviceaccount.com"
# Grant necessary permissions
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member="serviceAccount:$SA_EMAIL" \
--role="roles/datastore.user"
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member="serviceAccount:$SA_EMAIL" \
--role="roles/storage.objectViewer"
gcloud secrets add-iam-policy-binding dmaf-oauth-client \
--member="serviceAccount:$SA_EMAIL" \
--role="roles/secretmanager.secretAccessor"
gcloud secrets add-iam-policy-binding dmaf-oauth-token \
--member="serviceAccount:$SA_EMAIL" \
--role="roles/secretmanager.secretAccessor"
gcloud secrets add-iam-policy-binding dmaf-config \
--member="serviceAccount:$SA_EMAIL" \
--role="roles/secretmanager.secretAccessor"
# Create Cloud Run Job
gcloud run jobs create dmaf-scan \
--image=gcr.io/$PROJECT_ID/dmaf:latest \
--region=us-central1 \
--service-account=$SA_EMAIL \
--memory=2Gi \
--cpu=1 \
--max-retries=1 \
--task-timeout=15m \
--set-secrets="/secrets/client_secret.json=dmaf-oauth-client:latest" \
--set-secrets="/secrets/token.json=dmaf-oauth-token:latest" \
--set-secrets="/config/config.yaml=dmaf-config:latest" \
--args="--scan-once,--config,/config/config.yaml"

Parameters explained:
- --memory=2Gi: 2GB RAM (required for InsightFace model loading)
- --cpu=1: 1 vCPU (adequate for batch processing)
- --max-retries=1: Retry once if the job fails
- --task-timeout=15m: Maximum runtime (adjust based on media volume)
Before scheduling, test the job manually:
# Execute job manually
gcloud run jobs execute dmaf-scan --region=us-central1
# Follow execution in real-time
gcloud logging read "resource.type=cloud_run_job" \
--limit=50 \
--format="table(timestamp, textPayload)" \
--freshness=5m

Expected log output:
2024-01-15 10:30:00 INFO - Using face recognition backend: insightface
2024-01-15 10:30:01 INFO - Using Firestore: project=dmaf-production, collection=dmaf_files
2024-01-15 10:30:02 INFO - Running in batch mode (scan-once)
2024-01-15 10:30:05 INFO - Batch scan complete: 5 new, 5 processed, 3 matched, 3 uploaded, 0 errors
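If you ever want to scrape run statistics out of these logs, the summary line is easy to parse. A hypothetical helper, assuming the log format shown above:

```python
import re

def parse_batch_summary(line: str):
    """Extract counts from a 'Batch scan complete: ...' log line, or None."""
    m = re.search(
        r"Batch scan complete: (\d+) new, (\d+) processed, "
        r"(\d+) matched, (\d+) uploaded, (\d+) errors",
        line,
    )
    if not m:
        return None
    keys = ("new", "processed", "matched", "uploaded", "errors")
    return dict(zip(keys, map(int, m.groups())))
```

A nonzero `errors` count is a good trigger for paging yourself; the other counts are useful for spotting a stalled sync (everything zero for days).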
Automate execution with Cloud Scheduler:
# Create scheduler job (runs hourly)
gcloud scheduler jobs create http dmaf-schedule \
--location=us-central1 \
--schedule="0 * * * *" \
--uri="https://us-central1-run.googleapis.com/apis/run.googleapis.com/v1/namespaces/$PROJECT_ID/jobs/dmaf-scan:run" \
--http-method=POST \
--oauth-service-account-email=$SA_EMAIL
# Test scheduler immediately
gcloud scheduler jobs run dmaf-schedule --location=us-central1
# Check scheduler status
gcloud scheduler jobs describe dmaf-schedule --location=us-central1

Schedule options:
- 0 * * * * - Every hour (recommended; stays in the free tier)
- */15 * * * * - Every 15 minutes (faster, ~$1-3/month overage)
- 0 0 * * * - Once daily at midnight
Reference photos are stored in a private GCS bucket and downloaded by DMAF at container startup.
# Create a private bucket for reference photos
gsutil mb -p $PROJECT_ID -l us-central1 gs://$PROJECT_ID-known-people

# Upload reference photos (excluding stray Zone.Identifier files)
gsutil -m rsync -r -x ".*Zone\.Identifier$" data/known_people/ gs://$PROJECT_ID-known-people/

# Grant the job's service account read access
gsutil iam ch serviceAccount:dmaf-runner@$PROJECT_ID.iam.gserviceaccount.com:objectViewer gs://$PROJECT_ID-known-people

Add the bucket URI to your cloud config:

known_people_gcs_uri: "gs://your-project-known-people"

Then update the config secret:

gcloud secrets versions add dmaf-config --data-file=config.cloud.yaml

To update reference photos later, just re-run the gsutil rsync command; no Docker rebuild is needed.
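The bucket layout matters: one subdirectory per person, e.g. Alice/photo1.jpg. As a sketch of how such object names map to person labels (a hypothetical helper illustrating the layout, not DMAF's actual loader):

```python
from collections import defaultdict

def group_by_person(object_names: list) -> dict:
    """Group 'Person/photo.jpg' style GCS object names by person label."""
    people = defaultdict(list)
    for name in object_names:
        if "/" not in name:
            continue  # files at the bucket root carry no person label
        person, _, filename = name.partition("/")
        if filename:
            people[person].append(filename)
    return dict(people)
```

Anything uploaded to the bucket root would have no person label under this layout, which is why the troubleshooting section below stresses the per-person subdirectory structure.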
You need a method to sync WhatsApp media from your phone to the GCS bucket.
Best for: iPhone users, users already running OpenClaw, zero-maintenance setup
If you use OpenClaw as a personal AI assistant with WhatsApp connected, it can silently collect group media and sync it to GCS — no WhatsApp Desktop or rclone needed.
See the full guide: OpenClaw Integration
Quick summary:
- Configure OpenClaw to accept group messages with requireMention: true
- Media auto-downloads to ~/.openclaw/media/inbound/
- A cron script uploads images to GCS every 30 minutes
- The DMAF Cloud Run job processes them on schedule
Note: Only captures photos from other people in groups (your own sent photos are on your phone already).
Best for: Both iOS and Android users, cleanest setup with zero duplicates
✅ WHY THIS IS THE BEST OPTION:
- ✅ Only WhatsApp media - No personal photos, screenshots, or camera roll mixed in
- ✅ Zero duplicates - WhatsApp Desktop is separate from phone's photo backup
- ✅ Cross-platform - Works identically for iOS and Android via multi-device linking
- ✅ Survives phone transitions - Desktop stays configured when you switch phones
- ✅ 95% completeness - Gets media from all chats (chats must be opened occasionally)
- ✅ Simple phone setup - Just link WhatsApp Desktop, no extra apps needed
Architecture:
Phone (iOS/Android) → WhatsApp Multi-Device → Desktop (Mac/PC) → rclone → GCS Bucket → DMAF Cloud Run
On Mac:
# Download from Mac App Store or official site
# https://www.whatsapp.com/download

On Windows:
# Download from Microsoft Store or official site
# https://www.whatsapp.com/download

On Linux:
# Snap package (Ubuntu/Debian)
sudo snap install whatsapp-for-linux
# Or download the AppImage from the official site

iOS or Android:
- Open WhatsApp on your phone
- Tap Settings → Linked Devices (or WhatsApp Web/Desktop)
- Tap Link a Device
- Scan QR code shown on Desktop app
- ✅ Desktop is now linked (works even when phone is offline!)
Enable auto-download in WhatsApp Desktop:
- Open WhatsApp Desktop
- Settings → Storage
- Enable "Automatically download new photos and videos"
- Optional: Set size limit (e.g., 50MB per file)
Media storage locations:
- Mac: ~/Library/Containers/WhatsApp/Data/Library/Application Support/WhatsApp/Media/
- Windows (Store): %LOCALAPPDATA%\Packages\5319275A.WhatsAppDesktop_cv1g1gnamwj4y\LocalState\shared\transfers\
- Windows (Web install): %LOCALAPPDATA%\WhatsApp\ or %APPDATA%\WhatsApp\
Install rclone:
Mac:
brew install rclone

Windows:
# Download installer from https://rclone.org/downloads/
# Or use chocolatey:
choco install rclone

Linux:
curl https://rclone.org/install.sh | sudo bash

Configure GCS remote:
rclone config
# Choose: n (new remote)
# Name: gcs
# Storage: google cloud storage
# Project ID: your-project-id
# Service account: Use service account created earlier (dmaf-runner)
# Upload service account key JSON
# Test connection
rclone lsd gcs:

Test sync:
Mac:
# Dry run first
rclone sync "$HOME/Library/Containers/WhatsApp/Data/Library/Application Support/WhatsApp/Media/" \
gcs:your-project-id-whatsapp-media/ \
--dry-run
# If looks good, run for real
rclone sync "$HOME/Library/Containers/WhatsApp/Data/Library/Application Support/WhatsApp/Media/" \
gcs:your-project-id-whatsapp-media/

Windows:
# Dry run first
rclone sync "%LOCALAPPDATA%\Packages\5319275A.WhatsAppDesktop_cv1g1gnamwj4y\LocalState\shared\transfers\" gcs:your-project-id-whatsapp-media/ --dry-run
# If looks good, run for real
rclone sync "%LOCALAPPDATA%\Packages\5319275A.WhatsAppDesktop_cv1g1gnamwj4y\LocalState\shared\transfers\" gcs:your-project-id-whatsapp-media/

Mac - Using launchd (runs hourly):
Create launch agent:
mkdir -p ~/Library/LaunchAgents
cat > ~/Library/LaunchAgents/com.dmaf.whatsapp-sync.plist << 'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>com.dmaf.whatsapp-sync</string>
<key>ProgramArguments</key>
<array>
<string>/usr/local/bin/rclone</string>
<string>sync</string>
<string>/Users/YOUR_USERNAME/Library/Containers/WhatsApp/Data/Library/Application Support/WhatsApp/Media/</string>
<string>gcs:YOUR_PROJECT_ID-whatsapp-media/</string>
<string>--log-file</string>
<string>/Users/YOUR_USERNAME/rclone-sync.log</string>
<string>--log-level</string>
<string>INFO</string>
</array>
<key>StartInterval</key>
<integer>3600</integer>
<key>RunAtLoad</key>
<true/>
</dict>
</plist>
EOF
# Replace YOUR_USERNAME and YOUR_PROJECT_ID in the file above
# Load the launch agent
launchctl load ~/Library/LaunchAgents/com.dmaf.whatsapp-sync.plist
# Check status
launchctl list | grep dmaf

Windows - Using Task Scheduler (runs hourly):
Create sync script:
# Create sync-whatsapp-to-gcs.ps1
$script = @'
$WHATSAPP_PATH = "$env:LOCALAPPDATA\Packages\5319275A.WhatsAppDesktop_cv1g1gnamwj4y\LocalState\shared\transfers\"
$PROJECT_ID = "your-project-id"
rclone sync $WHATSAPP_PATH gcs:$PROJECT_ID-whatsapp-media/ `
--log-file "$HOME\rclone-sync.log" `
--log-level INFO `
--transfers 4 `
--checkers 8
Write-Output "Sync completed at $(Get-Date)"
'@
$script | Out-File -FilePath "$HOME\sync-whatsapp-to-gcs.ps1" -Encoding UTF8

Create scheduled task:
$action = New-ScheduledTaskAction -Execute "PowerShell.exe" -Argument "-File $HOME\sync-whatsapp-to-gcs.ps1"
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) -RepetitionInterval (New-TimeSpan -Hours 1)
$settings = New-ScheduledTaskSettingsSet -AllowStartIfOnBatteries -DontStopIfGoingOnBatteries -StartWhenAvailable
Register-ScheduledTask -Action $action -Trigger $trigger -Settings $settings -TaskName "DMAF WhatsApp Sync" -Description "Sync WhatsApp Desktop media to GCS for DMAF processing"

Linux - Using cron:
crontab -e
# Add this line (adjust path to your WhatsApp installation):
0 * * * * rclone sync ~/.local/share/whatsapp-for-linux/MediaCache/ gcs:your-project-id-whatsapp-media/ --log-file ~/rclone-sync.log

CRITICAL: Configure your phone to prevent WhatsApp photos from appearing in the camera roll backup:
iOS:
- WhatsApp → Settings → Chats → "Save to Camera Roll" → OFF
- This ensures WhatsApp media ONLY goes through Desktop → DMAF pipeline
- Your regular camera photos still backup to iCloud/Google Photos normally
Android:
- WhatsApp → Settings → Chats → "Media visibility" → OFF
- This prevents WhatsApp images from appearing in Gallery
- Your regular camera photos still backup to Google Photos normally
Result: Complete separation:
- 📷 Camera photos → Phone backup (iCloud/Google Photos) → Your main library ✅
- 💬 WhatsApp media → Desktop → rclone → GCS → DMAF filter → Curated Google Photos ✅
- 🚫 Zero duplicates! ✅
Update your config.yaml:
# In your config.yaml for Cloud Run
watch_dirs:
- "gs://your-project-id-whatsapp-media" # GCS bucket path
# Delete matched photos after upload to Google Photos
# Safe to enable - these are WhatsApp photos that already exist in chats
delete_source_after_upload: true
# Delete unmatched photos (optional)
# Safe for WhatsApp-only folder (no personal photos mixed in)
delete_unmatched_after_processing: false # Set to true if you want to auto-cleanup non-matches

Verify the pipeline end to end:
- Send a test photo in WhatsApp (with your face)
- Check WhatsApp Desktop - the photo should appear in the chat
- Check the storage location - the photo should be in the Media folder
- Wait for the rclone sync (or trigger it manually)
- Check the GCS bucket: gsutil ls gs://your-project-id-whatsapp-media/
- Wait for Cloud Scheduler (or trigger manually): gcloud run jobs execute dmaf-scan --region=us-central1
- Check Google Photos - the photo should appear if the face matched!
Completeness (~95%):
- WhatsApp Desktop downloads media from active conversations
- Chats must be opened occasionally for media to sync
- Very old/archived chats may not download media until opened
- Solution: Scroll through WhatsApp Desktop occasionally to trigger downloads
Storage Management:
- WhatsApp Desktop cache can grow large (10-50GB over time)
- Clear old media:
- Mac: Delete contents of Media folder (keeps recent chats)
- Windows: Delete contents of transfers folder
- rclone will re-sync only new files
Desktop Must Stay On:
- Mac/PC must be powered on and connected to internet for sync
- Options:
- Leave desktop on 24/7 (laptop closed, external display)
- Use a dedicated small PC/NUC as sync server
- Use Wake-on-LAN to schedule wake + sync + sleep
- WhatsApp Desktop: Free
- rclone: Free
- Desktop electricity: ~$1-2/month (if running 24/7)
- GCS storage: ~$0.50-2/month (with auto-cleanup)
- Total: $1.50-4/month
- Install FolderSync Pro ($5 one-time)
- Create a GCP service account key:
  gcloud iam service-accounts keys create sync-key.json \
    --iam-account=$SA_EMAIL
- Configure in FolderSync:
- Account type: Google Cloud Storage
- Upload service account key
- Bucket: $PROJECT_ID-whatsapp-media
- Local folder: /storage/emulated/0/WhatsApp/Media/WhatsApp Images/
- Sync type: To remote only
- Schedule: Every hour (match Cloud Scheduler frequency)
- Install Termux from F-Droid
- Set up rclone:
# In Termux
pkg install rclone
termux-setup-storage

# Configure rclone for GCS (interactive)
rclone config

# Sync script
cat > ~/sync-whatsapp.sh << 'EOF'
#!/data/data/com.termux/files/usr/bin/bash
rclone sync ~/storage/dcim/WhatsApp/WhatsApp\ Images/ \
  gcs:$PROJECT_ID-whatsapp-media/ \
  --log-file ~/rclone.log
EOF
chmod +x ~/sync-whatsapp.sh

# Add to crontab
crontab -e
# Add: 0 * * * * ~/sync-whatsapp.sh
- Install Syncthing on phone and a VM/server
- Sync WhatsApp folder to VM
- Use gsutil rsync on the VM to push to GCS:

gsutil -m rsync -r /path/to/synced/WhatsApp/ \
  gs://$PROJECT_ID-whatsapp-media/
# Get billing account ID
gcloud billing accounts list
export BILLING_ACCOUNT="XXXXXX-YYYYYY-ZZZZZZ"
# Create budget alert
gcloud billing budgets create \
--billing-account=$BILLING_ACCOUNT \
--display-name="DMAF Monthly Budget" \
--budget-amount=10USD \
--threshold-rule=percent=50 \
--threshold-rule=percent=90 \
--threshold-rule=percent=100

# Check Cloud Run job executions
gcloud run jobs executions list \
--job=dmaf-scan \
--region=us-central1 \
--limit=10
# Check Firestore usage
gcloud firestore operations list
# Estimate costs
gcloud billing accounts get-iam-policy $BILLING_ACCOUNT

| Service | Free Tier | Expected Usage | Cost |
|---|---|---|---|
| Cloud Run Jobs | 180K vCPU-sec, 360K GiB-sec/mo | ~720 executions/mo (hourly) | $0 |
| Cloud Scheduler | 3 free jobs | 1 job | $0 |
| Firestore | 1GB storage, 50K reads/day | <100MB, ~1K reads/day | $0 |
| Secret Manager | 6 active secrets | 3 secrets | $0 |
| GCS | 5GB storage (US regions) | <1GB | $0 |
| Artifact Registry | 0.5GB free | ~0.7GB image | ~$0.02/mo |
| Total | | | $0-5/mo |
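You can sanity-check the Cloud Run row with simple arithmetic. The per-run duration below is an assumption; read your actual durations from the execution logs:

```python
# Assumed workload; adjust to your own measurements.
runs_per_month = 24 * 30     # hourly Cloud Scheduler trigger
seconds_per_run = 60         # assumed average job duration (check execution logs)
vcpu, mem_gib = 1, 2         # matches the job's --cpu / --memory flags

vcpu_seconds = runs_per_month * seconds_per_run * vcpu     # consumed vCPU-seconds
gib_seconds = runs_per_month * seconds_per_run * mem_gib   # consumed GiB-seconds

# Compare against the Cloud Run free-tier allowances from the table.
within_free_tier = vcpu_seconds <= 180_000 and gib_seconds <= 360_000
```

With these assumptions the job uses 43,200 vCPU-seconds and 86,400 GiB-seconds per month, comfortably inside the free tier even at hourly frequency; runs would need to average several minutes each before costs appear.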
Symptom: Job logs show "403 Forbidden" or "Permission denied", or Cloud Scheduler shows PERMISSION_DENIED status.
Common cause: The Cloud Scheduler service account lacks roles/run.invoker on the Cloud Run job. This is needed for the scheduler to trigger job executions.
Solution:
# Grant invoker permission to the service account
gcloud run jobs add-iam-policy-binding dmaf-scan \
--region=us-central1 \
--member="serviceAccount:dmaf-runner@dmaf-production.iam.gserviceaccount.com" \
--role="roles/run.invoker"
# Verify with a manual execution
gcloud run jobs execute dmaf-scan --region=us-central1

Also verify the service account has the required project-level roles:
gcloud projects get-iam-policy $PROJECT_ID \
--flatten="bindings[].members" \
--filter="bindings.members:$SA_EMAIL"

Required roles: roles/run.invoker, roles/datastore.user, roles/storage.objectViewer, roles/secretmanager.secretAccessor.
Symptom: Job exceeds 15 minute timeout
Solution: Increase timeout or reduce processing:
gcloud run jobs update dmaf-scan \
--region=us-central1 \
--task-timeout=30m

Symptom: "Unable to download buffalo_l model"
Solution: Model is pre-downloaded in Docker image. Verify image:
docker run gcr.io/$PROJECT_ID/dmaf:latest \
python -c "from insightface.app import FaceAnalysis; FaceAnalysis(name='buffalo_l')"

Symptom: "Could not connect to Firestore"
Solution: Verify Firestore is created and service account has access:
gcloud firestore databases list
gcloud projects get-iam-policy $PROJECT_ID | grep datastore.user

Symptom: Job runs but finds no face matches (0 uploaded)
Solution:
- Verify known_people_gcs_uri is set in your config.cloud.yaml
- Verify the GCS bucket has reference photos: gsutil ls gs://$PROJECT_ID-known-people/
- Verify reference photos are organized as one subdirectory per person (e.g., Alice/photo1.jpg)
- Check the logs for the "Downloaded N known_people" message at startup
- Verify the GCS media bucket has images: gsutil ls gs://$PROJECT_ID-whatsapp-media/
- Check face recognition matches in the logs
Symptom: Job runs successfully but no images uploaded
Solution:
- Verify the GCS bucket has images: gsutil ls gs://$PROJECT_ID-whatsapp-media/
- Check face recognition matches in the logs
- Verify the known_people directory is populated in config
Symptom: Logs show warnings about missing Firestore composite indexes
Example:
WARNING - Could not cleanup borderline events:
The query requires a composite index that is not yet available...
Why This Happens:
- Firestore composite indexes are needed for efficient cleanup of old alert events
- The cleanup operation uses queries on the (alerted, created_ts) fields
- Indexes must be created manually or via gcloud firestore indexes create
Impact: ✅ Non-Critical - App handles gracefully
- Core functionality (face recognition, upload) works perfectly
- Alert batching and sending works perfectly
- Only affects cleanup of old alert event records (90+ days old)
- The cleanup simply skips if indexes don't exist
- No data loss, no errors visible to users
Solution (Optional): If you want to enable cleanup of old events, create the indexes:
# For borderline events
gcloud firestore indexes composite create \
--collection-group=borderline_events \
--field-config field-path=alerted,order=ascending \
--field-config field-path=created_ts,order=ascending
# For error events
gcloud firestore indexes composite create \
--collection-group=error_events \
--field-config field-path=alerted,order=ascending \
--field-config field-path=created_ts,order=ascending

Recommended: Skip index creation unless your Firestore storage exceeds 100MB from event records.
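Conceptually, the cleanup these indexes enable selects events that have already been alerted and are older than event_retention_days. The cutoff itself is plain date arithmetic (a sketch; the helper name is hypothetical, while the alerted/created_ts field names come from the indexes above):

```python
from datetime import datetime, timedelta, timezone

def retention_cutoff(retention_days=90, now=None):
    """Events with created_ts earlier than this (and alerted == True)
    are eligible for cleanup under the retention policy."""
    now = now or datetime.now(timezone.utc)
    return now - timedelta(days=retention_days)
```

The composite index exists because Firestore needs one to combine the equality filter on alerted with the range filter on created_ts in a single query.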
How It Works:
- Cloud Run containers have read-only filesystems for security
- When OAuth tokens expire, Google APIs automatically refresh them in-memory
- The refresh is transparent and doesn't require writing to disk
- Tokens are valid for ~1 hour; each Cloud Run execution refreshes as needed
Why token.json Never Updates:
- The code attempts to save refreshed tokens to disk (for local development compatibility)
- In Cloud Run, this write fails silently with a debug log message
- The in-memory refresh still works perfectly
- This is by design and doesn't affect functionality
When to Update token.json Secret: Only if you see "Token has been expired or revoked" errors:
# Generate fresh token locally
python -m dmaf --config config.yaml # Run once locally to refresh
# Update secret
gcloud secrets versions add dmaf-oauth-token --data-file=token.json

Best Practice: Tokens typically last 6-12 months before requiring manual refresh.
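If you want to check a local token.json before deciding whether to refresh it, google-auth's serialized credentials typically include an ISO-format expiry field; that field name is an assumption based on the usual layout, so verify it against your own file. A quick local check:

```python
import json
from datetime import datetime, timezone

def token_expired(token_json, now=None):
    """Check whether the access token in a token.json payload is past expiry.

    Note: an expired *access* token is normal - it refreshes in memory.
    Only "expired or revoked" refresh-token errors need a re-uploaded secret.
    """
    data = json.loads(token_json)
    expiry = datetime.fromisoformat(data["expiry"].replace("Z", "+00:00"))
    return (now or datetime.now(timezone.utc)) >= expiry
```

This distinguishes the routine hourly access-token expiry (harmless) from a genuinely stale file worth regenerating.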
# Make code changes
git commit -am "Update feature X"
# Rebuild and push image
gcloud builds submit --config cloudbuild.yaml
# Cloud Run Job automatically uses :latest tag
# No manual update needed if using :latest

# Update config.yaml locally
# Update secret
gcloud secrets versions add dmaf-config \
--data-file=config.yaml
# Verify new version is used (restart may be needed)
gcloud run jobs describe dmaf-scan --region=us-central1

# Update OAuth token (if refreshed)
gcloud secrets versions add dmaf-oauth-token \
--data-file=token.json
# Verify
gcloud secrets versions list dmaf-oauth-token

To completely remove the deployment:
# Delete Cloud Run job
gcloud run jobs delete dmaf-scan --region=us-central1
# Delete Cloud Scheduler job
gcloud scheduler jobs delete dmaf-schedule --location=us-central1
# Delete secrets
gcloud secrets delete dmaf-oauth-client
gcloud secrets delete dmaf-oauth-token
gcloud secrets delete dmaf-config
# Delete GCS bucket
gsutil -m rm -r gs://$PROJECT_ID-whatsapp-media
# Delete Firestore database (careful - this deletes all data!)
gcloud firestore databases delete --database='(default)'
# Delete service account
gcloud iam service-accounts delete $SA_EMAIL
# Delete project (if dedicated project)
gcloud projects delete $PROJECT_ID

If $5/month is still too much:
Option 1 - Raspberry Pi at home:
- Run DMAF in watcher mode on a Raspberry Pi
- One-time cost: ~$50 for a Pi 4
- Power consumption: ~$1-2/month

Option 2 - Oracle Cloud Always Free:
- 2 free AMD VMs (1/8 OCPU, 1GB RAM each)
- Run DMAF in a Docker container
- Truly free forever (no credit card expiry)

Option 3 - Fly.io free tier:
- 3 shared-CPU VMs (256MB RAM each)
- Run DMAF as a scheduled task
- Free tier includes 160GB bandwidth
For issues or questions:
- GitHub Issues: https://github.com/yourusername/wa_automate/issues
- Documentation: https://github.com/yourusername/wa_automate/blob/main/README.md
- Rotate OAuth tokens regularly: Generate new tokens every 6 months
- Use Secret Manager: Never commit secrets to git
- Restrict service account: Grant minimum required permissions
- Enable VPC Service Controls: For production deployments
- Use private GCS buckets: Don't make WhatsApp media public
- Audit logs: Enable Cloud Audit Logs for compliance