# FoundryGate

Local OpenAI-compatible AI gateway for 🦞 OpenClaw and other AI-native clients.
FoundryGate gives OpenClaw, n8n, CLI tools, and custom apps one local endpoint and routes each request to the best configured provider or local worker. It keeps routing, fallback, onboarding, and operator visibility under your control instead of scattering provider logic across every client.
- Quickstart
- Why FoundryGate
- How It Works
- API Surface
- How FoundryGate Compares
- Deployment
- More Resources
- Community And Security
## Why FoundryGate

- Single local endpoint for many upstreams: cloud providers, proxy providers, and local workers can sit behind the same base URL.
- OpenAI-compatible runtime: chat completions, model discovery, image generation, and image editing use familiar OpenAI-style paths.
- Better routing than simple first-match proxying: policies, static rules, heuristics, client profiles, hooks, and route-fit scoring all participate.
- Strong operator visibility: `/health`, provider inventory, route previews, traces, stats, update checks, and dashboard views are built in, including per-client usage highlights.
- Practical rollout controls: fallback chains, maintenance windows, rollout rings, provider scopes, and post-update verification gates are already there.
- Copy/paste onboarding: OpenClaw, n8n, CLI, delegated-agent traffic, provider templates, and env starter files ship with the repo.
## Quickstart

The fastest local path is the helper-driven bootstrap:

```bash
git clone https://github.com/typelicious/FoundryGate.git foundrygate
cd foundrygate
cp .env.example .env
./scripts/foundrygate-bootstrap
$EDITOR .env
./scripts/foundrygate-doctor
python3 -m venv .venv
. .venv/bin/activate
pip install -r requirements.txt
python -m foundrygate
```

In another terminal:

```bash
curl -fsS http://127.0.0.1:8090/health
curl -fsS http://127.0.0.1:8090/v1/models
```

Then use the onboarding helpers to move from “the server starts” to “real clients are ready”:

```bash
./scripts/foundrygate-onboarding-report
./scripts/foundrygate-onboarding-validate
```

If you prefer a packaged or service-driven install, jump to Deployment or the fuller Operations guide.
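Once those checks pass, any OpenAI-style client can consume the same endpoints programmatically. A minimal sketch, assuming the gateway returns the standard OpenAI model-list shape (`{"object": "list", "data": [...]}`) and that no API key is enforced locally; the `fetch_models` helper and the sample model ids are illustrative, not part of FoundryGate:

```python
import json
from urllib.request import Request, urlopen

BASE_URL = "http://127.0.0.1:8090/v1"  # FoundryGate's local endpoint


def list_model_ids(payload: dict) -> list[str]:
    """Extract model ids from an OpenAI-style /v1/models response."""
    return [entry["id"] for entry in payload.get("data", [])]


def fetch_models(base_url: str = BASE_URL) -> list[str]:
    """Fetch and parse the model list from a running gateway."""
    with urlopen(Request(f"{base_url}/models")) as resp:
        return list_model_ids(json.load(resp))


# Offline example of the expected response shape:
sample = {"object": "list", "data": [{"id": "auto"}, {"id": "local-worker"}]}
print(list_model_ids(sample))  # -> ['auto', 'local-worker']
```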
## How It Works

```text
Client (OpenClaw, n8n, CLI, custom app)
  |
  v
http://127.0.0.1:8090/v1
  |
  +--> policy rules
  +--> static rules
  +--> heuristic rules
  +--> optional request hooks
  +--> optional client profile defaults
  +--> optional LLM classifier
  |
  +--> provider selection and fallback
       |- cloud APIs
       |- proxy providers
       `- local workers
```
Routing is layered on purpose:
- Policies can enforce locality, capability, cost, or compliance preferences.
- Static and heuristic rules catch known patterns without needing a classifier call.
- Request hooks can inject bounded routing hints before the final decision.
- Client profiles give OpenClaw, n8n, CLI tools, and custom apps different safe defaults.
- Provider scoring considers health, latency, context headroom, token limits, cache hints, and recent failures.
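The scoring signals above can be combined into a single route-fit number. A sketch of the idea; the field names, weights, and scale here are illustrative assumptions, not FoundryGate's actual internals:

```python
from dataclasses import dataclass


@dataclass
class ProviderSnapshot:
    # All fields are illustrative; real signals live in FoundryGate's provider state.
    healthy: bool
    latency_ms: float        # recent average latency
    context_headroom: int    # tokens of context this provider can still accept
    recent_failures: int     # failures in the current window


def route_fit(p: ProviderSnapshot, needed_tokens: int) -> float:
    """Score a provider for one request; higher is better, 0.0 means unusable."""
    if not p.healthy or p.context_headroom < needed_tokens:
        return 0.0
    score = 100.0
    score -= min(p.latency_ms / 10.0, 50.0)  # penalize slow providers, capped
    score -= 20.0 * p.recent_failures        # back off after recent errors
    return max(score, 0.0)


fast = ProviderSnapshot(healthy=True, latency_ms=120, context_headroom=8000, recent_failures=0)
flaky = ProviderSnapshot(healthy=True, latency_ms=80, context_headroom=8000, recent_failures=3)
print(route_fit(fast, 2000) > route_fit(flaky, 2000))  # -> True
```

In a real gateway the winner would be tried first and the remaining providers would form the fallback chain.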
For OpenClaw specifically, both one-agent and many-agent traffic can use the same endpoint. FoundryGate can distinguish delegated traffic through request headers such as `x-openclaw-source` when they are present.
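A sketch of how delegated traffic might be separated from direct traffic by header inspection. Only the `x-openclaw-source` header comes from this README; the header values, profile names, and fallback behavior are assumptions:

```python
def classify_client(headers: dict[str, str]) -> str:
    """Map request headers to a client profile name (hypothetical profiles)."""
    # Normalize keys, since HTTP header lookup is case-insensitive.
    lowered = {k.lower(): v for k, v in headers.items()}
    source = lowered.get("x-openclaw-source")
    if source == "delegate":
        return "openclaw-delegate"  # many-agent, delegated traffic
    if source is not None:
        return "openclaw-direct"    # single-agent OpenClaw traffic
    return "default"                # non-OpenClaw clients get profile defaults


print(classify_client({"X-OpenClaw-Source": "delegate"}))  # -> openclaw-delegate
print(classify_client({}))                                 # -> default
```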
## API Surface

FoundryGate keeps the primary surface compact and OpenAI-compatible. The full endpoint reference lives in `docs/API.md`.
| Endpoint | Purpose |
|---|---|
| `GET /health` | Service health, provider status, and capability coverage |
| `GET /v1/models` | OpenAI-compatible model list |
| `POST /v1/chat/completions` | OpenAI-compatible chat routing |
| `POST /v1/images/generations` | OpenAI-compatible image generation |
| `POST /v1/images/edits` | OpenAI-compatible image editing |
| `POST /api/route` | Chat routing dry-run with decision details |
| `POST /api/route/image` | Image routing dry-run |
| `GET /api/providers` | Provider inventory and filterable coverage view |
| `GET /api/update` | Update status, guardrails, and rollout advice |
Quick checks:

```bash
curl -fsS http://127.0.0.1:8090/health
curl -fsS http://127.0.0.1:8090/v1/models
curl -fsS http://127.0.0.1:8090/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "auto",
    "messages": [
      {"role": "user", "content": "Summarize why a local AI gateway is useful."}
    ]
  }'
```

## How FoundryGate Compares

The useful comparison is not “router vs router”, but how much routing and operator burden each approach leaves with you.
| Capability | Direct provider wiring | Hosted remote router | FoundryGate |
|---|---|---|---|
| One local endpoint for many clients | No | Varies | Yes |
| Local workers and cloud providers in one route set | Manual | Varies | Yes |
| Policy routing, client profiles, and hooks | Manual | Varies | Yes |
| Operator-owned health, traces, and update controls | Partial | Varies | Yes |
| Can stay fully under local operator control | Yes | Varies | Yes |
| Copy/paste onboarding for OpenClaw, n8n, and CLI tools | Manual | Varies | Yes |
FoundryGate is a local-first gateway. That means you can keep traffic, fallback policy, rollout controls, and provider selection logic close to the clients that actually depend on them.
## Deployment

FoundryGate can stay small in development and still scale into a more repeatable operator setup:
- Local Python run: quickest path for development and testing.
- `systemd` on Linux: recommended for long-running generic host installs.
- Docker and GHCR path: tagged releases build container artifacts through the release workflow.
- Python package path: release workflows build `sdist` and `wheel`.
- Separate npm CLI package: `packages/foundrygate-cli` gives CLI-facing environments a small Node entry point without changing the Python service runtime.
## More Resources

Start here for deeper details:
- Architecture
- AI-native client matrix
- API reference
- Configuration reference
- Operations guide
- Integrations
- Onboarding
- Examples
- First-wave AI-native starters
- Second-wave AI-native starters
- Third-wave AI-native starters
- Security review for v1.0.0
- Publishing
- Troubleshooting
- Roadmap
- Releases
## Community And Security

FoundryGate ships with repo-safety checks for `.ssh/`, `*.db*`, `*.sqlite*`, and `*.log`, plus CodeQL, Dependabot, secret scanning, and documented release review steps.
Apache-2.0. See LICENSE.
⭐ If FoundryGate saves you time or money, feel free to star the repo. ❤️