This repository contains a Monk.io template to deploy the OpenClaw AI assistant gateway.
- Install Monk
- Register and log in to Monk
- Add Cloud Provider (or run locally)
```shell
git clone https://github.com/monk-io/monk-openclaw
cd monk-openclaw
monk load MANIFEST

# Generate and add gateway token
monk secrets add -g openclaw-gateway-token="$(openssl rand -hex 32)"
```

Run the gateway:

```shell
monk run openclaw/gateway
```
```
✔ Starting the run job: local/openclaw/gateway... DONE
✔ Preparing nodes DONE
✔ Checking/pulling images...
✔ [================================================] 100% alpine/openclaw:2026.2.1
✔ Checking/pulling images DONE
✔ Starting containers DONE
✔ Started local/openclaw/gateway

🔩 templates/local/openclaw/gateway
 └─🧊 Peer local
    └─🔩 templates/local/openclaw/gateway
       └─📦 openclaw-gateway running
          ├─🧩 alpine/openclaw:2026.2.1
          ├─🔌 open (public) TCP 0.0.0.0:18789 -> 18789
          └─🔌 open (public) TCP 0.0.0.0:18790 -> 18790

💡 You can inspect and manage your above stack with these commands:

    monk logs (-f) local/openclaw/gateway - Inspect logs
    monk shell local/openclaw/gateway - Connect to the container's shell
    monk do local/openclaw/gateway/action_name - Run defined action (if exists)

💡 Check monk help for more!
```

To run with a local Ollama instance (CPU):

```shell
# Load Ollama template first
monk load ollama/ollama

# Run OpenClaw + Ollama stack
monk run openclaw/stack-ollama
```

With GPU acceleration:

```shell
# Load Ollama GPU template first
monk load ollama/ollama-gpu

# Run OpenClaw + Ollama GPU stack
monk run openclaw/stack-ollama-gpu
```

After deploying with Ollama, pull a model:

```shell
# Pull a fast model for CPU
monk do openclaw/ollama/pull-model model=qwen2.5:0.5b

# Or pull a larger model for GPU
monk do openclaw/ollama/pull-model model=llama3.3
```
```shell
# List installed models
monk do openclaw/ollama/list-models
```

| Runnable | Description |
|---|---|
| `openclaw/gateway` | Standalone OpenClaw gateway |
| `openclaw/cli` | OpenClaw CLI for management |
| `openclaw/stack` | Gateway process group |
| `openclaw/stack-ollama` | Gateway + Ollama (CPU) |
| `openclaw/stack-ollama-gpu` | Gateway + Ollama (GPU) |
| `openclaw/ollama` | Ollama CPU instance |
| `openclaw/ollama-gpu` | Ollama GPU instance |
```shell
# Run onboarding wizard
monk do openclaw/cli/onboard

# Login to WhatsApp
monk do openclaw/cli/channels-login

# Check channels status
monk do openclaw/cli/channels-status

# Health check
monk do openclaw/cli/health
```

The template exposes the following variables:

```yaml
variables:
  openclaw-image: "alpine/openclaw:2026.2.1" # Docker image
  gateway-port: 18789                        # Gateway HTTP port
  bridge-port: 18790                         # Bridge port
  gateway-bind: "lan"                        # Bind mode (lan, loopback, public)
```

| Secret | Required | Description |
|---|---|---|
| `openclaw-gateway-token` | Yes | API token for gateway access |
| `claude-ai-session-key` | No | Claude AI session key |
| `claude-web-session-key` | No | Claude Web session key |
| `claude-web-cookie` | No | Claude Web cookie |
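The quick start generates the gateway token with `openssl`; the token is an opaque string, so only its length and character set matter. A minimal sketch of what that command produces:

```shell
# openssl rand -hex 32 emits 32 random bytes encoded as 64 hex characters
TOKEN="$(openssl rand -hex 32)"
echo "${#TOKEN}"   # prints 64
```

Any sufficiently random string works as a token; the `openssl` invocation is just a convenient way to get one.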
Add secrets:

```shell
monk secrets add -g openclaw-gateway-token="your-token-here"
```

The template uses `qwen2.5:0.5b` (CPU) or `llama3.3` (GPU) by default. To use a different model:

- Pull the new model:

  ```shell
  monk do openclaw/ollama/pull-model model=mistral
  ```

- Edit `openclaw.yaml`: find the `gateway-ollama` section and update:

  ```json
  "models": [
    { "id": "mistral", "name": "Mistral 7B" },
    ...
  ]
  ```

  And set it as the default:

  ```json
  "agents": {
    "defaults": {
      "model": {
        "primary": "ollama/mistral"
      }
    }
  }
  ```

- Reload and restart:

  ```shell
  monk load MANIFEST
  monk stop openclaw/stack-ollama
  monk run openclaw/stack-ollama
  ```

Important: the model ID must match the name shown by `monk do openclaw/ollama/list-models`.
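To avoid a mismatch, you can check the candidate ID against the installed list before editing the config. A sketch, using hard-coded sample data in place of real `list-models` output (whose exact format may differ):

```shell
# INSTALLED is illustrative sample data; in practice you would capture it with:
#   INSTALLED="$(monk do openclaw/ollama/list-models)"
INSTALLED="qwen2.5:0.5b
mistral:latest"
MODEL="mistral"
if printf '%s\n' "$INSTALLED" | grep -q "^${MODEL}"; then
  echo "ok: ${MODEL} is installed"
else
  echo "missing: pull ${MODEL} first"
fi
```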
OpenClaw gateway exposes a REST API on port 18789.

Check health:

```shell
curl http://localhost:18789/health
```

Query status with your gateway token:

```shell
curl -H "Authorization: Bearer YOUR_TOKEN" http://localhost:18789/api/status
```

Data is persisted under `${monk-volume-path}/openclaw`:

- `config/` - OpenClaw configuration
- `workspace/` - Agent workspaces

Ollama models are persisted under `${monk-volume-path}/ollama`.
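Because all state lives under the volume path, backing up the gateway configuration is just archiving that directory. A sketch, assuming the volume resolves to a host path you can read (`OPENCLAW_DATA` is a placeholder for the resolved `${monk-volume-path}/openclaw`, not a variable the template defines):

```shell
# Archive the persisted OpenClaw config directory; adjust OPENCLAW_DATA
# to wherever monk stores volumes on your host.
OPENCLAW_DATA="${OPENCLAW_DATA:-/var/lib/monkd/volumes/openclaw}"
if [ -d "$OPENCLAW_DATA/config" ]; then
  tar czf openclaw-config-backup.tgz -C "$OPENCLAW_DATA" config
  echo "backup written to openclaw-config-backup.tgz"
else
  echo "config directory not found at $OPENCLAW_DATA" >&2
fi
```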
| Model | Size | Speed | Command |
|---|---|---|---|
| qwen2.5:0.5b | 398MB | ⚡⚡⚡ Very fast | monk do openclaw/ollama/pull-model model=qwen2.5:0.5b |
| tinyllama | 637MB | ⚡⚡⚡ Very fast | monk do openclaw/ollama/pull-model model=tinyllama |
| llama3.2:1b | 1.3GB | ⚡⚡ Fast | monk do openclaw/ollama/pull-model model=llama3.2:1b |
| gemma2:2b | 1.6GB | ⚡⚡ Fast | monk do openclaw/ollama/pull-model model=gemma2:2b |
View logs:

```shell
monk logs -f openclaw/gateway
monk logs -f openclaw/ollama
```

Open a shell in a container:

```shell
monk shell openclaw/gateway
monk shell openclaw/ollama
```

Query the Ollama API directly (Ollama's default port, 11434):

```shell
curl http://localhost:11434/api/tags
```

Stop or remove the stack:

```shell
# Stop stack
monk stop openclaw/stack-ollama

# Remove completely
monk purge openclaw/stack-ollama
```