
OmniRoute is an AI gateway for multi-provider LLMs: an OpenAI-compatible endpoint with smart routing, load balancing, retries, and fallbacks. Add policies, rate limits, caching, and observability for reliable, cost-aware inference.



🚀 OmniRoute — The Free AI Gateway

Never stop coding. Smart routing to FREE & low-cost AI models with automatic fallback.

Your universal API proxy — one endpoint, 36+ providers, zero downtime.

Chat Completions • Embeddings • Image Generation • Audio • Reranking • 100% TypeScript


🤖 Free AI Provider for your favorite coding agents

Connect any AI-powered IDE or CLI tool through OmniRoute — free API gateway for unlimited coding.

OpenClaw ⭐ 205K • NanoBot ⭐ 20.9K • PicoClaw ⭐ 14.6K • ZeroClaw ⭐ 9.9K • IronClaw ⭐ 2.1K • OpenCode ⭐ 106K • Codex CLI ⭐ 60.8K • Claude Code ⭐ 67.3K • Gemini CLI ⭐ 94.7K • Kilo Code ⭐ 15.5K

📡 All agents connect via http://localhost:20128/v1 or http://cloud.omniroute.online/v1 — one config, unlimited models and quota



🌐 Website • 🚀 Quick Start • 💡 Features • 📖 Docs • 💰 Pricing • 💬 WhatsApp

🌐 Available in: English | Português | Español | Русский | 中文 | Deutsch | Français | Italiano


🤔 Why OmniRoute?

Stop wasting money and hitting limits:

  • Subscription quota expires unused every month
  • Rate limits stop you mid-coding
  • Expensive APIs ($20-50/month per provider)
  • Manual switching between providers

OmniRoute solves this:

  • Maximize subscriptions - Track quota, use every bit before reset
  • Auto fallback - Subscription → API Key → Cheap → Free, zero downtime
  • Multi-account - Round-robin between accounts per provider
  • Universal - Works with Claude Code, Codex, Gemini CLI, Cursor, Cline, OpenClaw, any CLI tool

🔄 How It Works

┌─────────────┐
│  Your CLI   │  (Claude Code, Codex, Gemini CLI, OpenClaw, Cursor, Cline...)
│   Tool      │
└──────┬──────┘
       │ http://localhost:20128/v1
       ↓
┌─────────────────────────────────────────┐
│           OmniRoute (Smart Router)      │
│  • Format translation (OpenAI ↔ Claude) │
│  • Quota tracking + Embeddings + Images │
│  • Auto token refresh                   │
└──────┬──────────────────────────────────┘
       │
       ├─→ [Tier 1: SUBSCRIPTION] Claude Code, Codex, Gemini CLI
       │   ↓ quota exhausted
       ├─→ [Tier 2: API KEY] DeepSeek, Groq, xAI, Mistral, NVIDIA NIM, etc.
       │   ↓ budget limit
       ├─→ [Tier 3: CHEAP] GLM ($0.6/1M), MiniMax ($0.2/1M)
       │   ↓ budget limit
       └─→ [Tier 4: FREE] iFlow, Qwen, Kiro (unlimited)

Result: Never stop coding, minimal cost

⚡ Quick Start

1. Install globally:

npm install -g omniroute
omniroute

🎉 Dashboard opens at http://localhost:20128

| Command | Description |
|---|---|
| omniroute | Start server (default port 20128) |
| omniroute --port 3000 | Use custom port |
| omniroute --no-open | Don't auto-open browser |
| omniroute --help | Show help |

2. Connect a FREE provider:

Dashboard → Providers → Connect Claude Code or Antigravity → OAuth login → Done!

3. Use in your CLI tool:

Claude Code/Codex/Gemini CLI/OpenClaw/Cursor/Cline Settings:
  Endpoint: http://localhost:20128/v1
  API Key: [copy from dashboard]
  Model: if/kimi-k2-thinking

That's it! Start coding with FREE AI models.
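Any OpenAI-compatible client can talk to the endpoint. The sketch below shows the shape of the chat-completions request your tool sends through OmniRoute (the API key and model name are placeholders; copy real values from the dashboard):

```typescript
// Sketch of the chat-completions request an OpenAI-compatible tool sends
// through OmniRoute. The key and model below are illustrative placeholders.
interface ChatRequest {
  url: string;
  headers: Record<string, string>;
  body: { model: string; messages: { role: string; content: string }[] };
}

function buildChatRequest(apiKey: string, model: string, prompt: string): ChatRequest {
  return {
    url: "http://localhost:20128/v1/chat/completions",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: { model, messages: [{ role: "user", content: prompt }] },
  };
}

// To actually send it (requires a running OmniRoute instance):
// const req = buildChatRequest("sk-test", "if/kimi-k2-thinking", "Hello!");
// const res = await fetch(req.url, { method: "POST", headers: req.headers, body: JSON.stringify(req.body) });
```

Because the endpoint is OpenAI-compatible, the same request works whether the model field names a single provider model or a combo.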

Alternative — run from source:

cp .env.example .env
npm install
PORT=20128 NEXT_PUBLIC_BASE_URL=http://localhost:20128 npm run dev

🐳 Docker

OmniRoute is available as a public Docker image on Docker Hub.

Quick run:

docker run -d \
  --name omniroute \
  --restart unless-stopped \
  -p 20128:20128 \
  -v omniroute-data:/app/data \
  diegosouzapw/omniroute:latest

With environment file:

# Copy and edit .env first
cp .env.example .env

docker run -d \
  --name omniroute \
  --restart unless-stopped \
  --env-file .env \
  -p 20128:20128 \
  -v omniroute-data:/app/data \
  diegosouzapw/omniroute:latest

Using Docker Compose:

# Base profile (no CLI tools)
docker compose --profile base up -d

# CLI profile (Claude Code, Codex, OpenClaw built-in)
docker compose --profile cli up -d

| Image | Tag | Size | Description |
|---|---|---|---|
| diegosouzapw/omniroute | latest | ~250MB | Latest stable release |
| diegosouzapw/omniroute | 1.0.3 | ~250MB | Current version |

💰 Pricing at a Glance

| Tier | Provider | Cost | Quota Reset | Best For |
|---|---|---|---|---|
| 💳 SUBSCRIPTION | Claude Code (Pro) | $20/mo | 5h + weekly | Already subscribed |
| | Codex (Plus/Pro) | $20-200/mo | 5h + weekly | OpenAI users |
| | Gemini CLI | FREE | 180K/mo + 1K/day | Everyone! |
| | GitHub Copilot | $10-19/mo | Monthly | GitHub users |
| 🔑 API KEY | NVIDIA NIM | FREE (1000 credits) | One-time | Free tier testing |
| | DeepSeek | Pay-per-use | None | Best price/quality |
| | Groq | Free tier + paid | Rate limited | Ultra-fast inference |
| | xAI (Grok) | Pay-per-use | None | Grok models |
| | Mistral | Free tier + paid | Rate limited | European AI |
| | OpenRouter | Pay-per-use | None | 100+ models |
| 💰 CHEAP | GLM-4.7 | $0.6/1M | Daily 10AM | Budget backup |
| | MiniMax M2.1 | $0.2/1M | 5-hour rolling | Cheapest option |
| | Kimi K2 | $9/mo flat | 10M tokens/mo | Predictable cost |
| 🆓 FREE | iFlow | $0 | Unlimited | 8 models free |
| | Qwen | $0 | Unlimited | 3 models free |
| | Kiro | $0 | Unlimited | Claude free |

💡 Pro Tip: Start with Gemini CLI (180K free/month) + iFlow (unlimited free) combo = $0 cost!


🎯 Use Cases

Case 1: "I have Claude Pro subscription"

Problem: Quota expires unused, rate limits during heavy coding

Combo: "maximize-claude"
  1. cc/claude-opus-4-6        (use subscription fully)
  2. glm/glm-4.7               (cheap backup when quota out)
  3. if/kimi-k2-thinking       (free emergency fallback)

Monthly cost: $20 (subscription) + ~$5 (backup) = $25 total
vs. $20 + hitting limits = frustration

Case 2: "I want zero cost"

Problem: Can't afford subscriptions, need reliable AI coding

Combo: "free-forever"
  1. gc/gemini-3-flash         (180K free/month)
  2. if/kimi-k2-thinking       (unlimited free)
  3. qw/qwen3-coder-plus       (unlimited free)

Monthly cost: $0
Quality: Production-ready models

Case 3: "I need 24/7 coding, no interruptions"

Problem: Deadlines, can't afford downtime

Combo: "always-on"
  1. cc/claude-opus-4-6        (best quality)
  2. cx/gpt-5.2-codex          (second subscription)
  3. glm/glm-4.7               (cheap, resets daily)
  4. minimax/MiniMax-M2.1      (cheapest, 5h reset)
  5. if/kimi-k2-thinking       (free unlimited)

Result: 5 layers of fallback = zero downtime

Case 4: "I want FREE AI in OpenClaw"

Problem: Need AI assistant in messaging apps, completely free

Combo: "openclaw-free"
  1. if/glm-4.7                (unlimited free)
  2. if/minimax-m2.1           (unlimited free)
  3. if/kimi-k2-thinking       (unlimited free)

Monthly cost: $0
Access via: WhatsApp, Telegram, Slack, Discord, iMessage, Signal...

💡 Key Features

🧠 Core Routing & Intelligence

| Feature | What It Does |
|---|---|
| 🎯 Smart 4-Tier Fallback | Auto-route: Subscription → API Key → Cheap → Free |
| 📊 Real-Time Quota Tracking | Live token count + reset countdown per provider |
| 🔄 Format Translation | OpenAI ↔ Claude ↔ Gemini ↔ Cursor ↔ Kiro seamless + response sanitization |
| 👥 Multi-Account Support | Multiple accounts per provider with intelligent selection |
| 🔄 Auto Token Refresh | OAuth tokens refresh automatically with retry |
| 🎨 Custom Combos | 6 strategies: fill-first, round-robin, p2c, random, least-used, cost-optimized |
| 🧩 Custom Models | Add any model ID to any provider |
| 🌐 Wildcard Router | Route provider/* patterns to any provider dynamically |
| 🧠 Thinking Budget | Passthrough, auto, custom, and adaptive modes for reasoning models |
| 💬 System Prompt Injection | Global system prompt applied across all requests |
| 📄 Responses API | Full OpenAI Responses API (/v1/responses) support for Codex |

🎵 Multi-Modal APIs

| Feature | What It Does |
|---|---|
| 🖼️ Image Generation | /v1/images/generations — 4 providers, 9+ models |
| 📐 Embeddings | /v1/embeddings — 6 providers, 9+ models |
| 🎤 Audio Transcription | /v1/audio/transcriptions — Whisper-compatible |
| 🔊 Text-to-Speech | /v1/audio/speech — Multi-provider audio synthesis |
| 🛡️ Moderations | /v1/moderations — Content safety checks |
| 🔀 Reranking | /v1/rerank — Document relevance reranking |

🛡️ Resilience & Security

| Feature | What It Does |
|---|---|
| 🔌 Circuit Breaker | Auto-open/close per-provider with configurable thresholds |
| 🛡️ Anti-Thundering Herd | Mutex + semaphore rate-limit for API key providers |
| 🧠 Semantic Cache | Two-tier cache (signature + semantic) reduces cost & latency |
| Request Idempotency | 5s dedup window for duplicate requests |
| 🔒 TLS Fingerprint Spoofing | Bypass TLS-based bot detection via wreq-js |
| 🌐 IP Filtering | Allowlist/blocklist for API access control |
| 📊 Editable Rate Limits | Configurable RPM, min gap, and max concurrent at system level |
| 🛡 API Endpoint Protection | Auth gating + provider blocking for the /models endpoint |
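A per-provider circuit breaker follows the classic closed/open/half-open pattern. The sketch below illustrates the idea; the threshold, cooldown, and class names are illustrative, not OmniRoute's internal API:

```typescript
// Minimal circuit-breaker sketch: open after N consecutive failures,
// allow a trial request (half-open) after a cooldown, close again on success.
class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;

  constructor(
    private readonly threshold = 5,       // consecutive failures before opening
    private readonly cooldownMs = 30_000, // wait before allowing a half-open trial
    private readonly now: () => number = Date.now,
  ) {}

  allowRequest(): boolean {
    if (this.failures < this.threshold) return true;      // closed: pass through
    return this.now() - this.openedAt >= this.cooldownMs; // open: only after cooldown
  }

  onSuccess(): void {
    this.failures = 0; // trial succeeded: close the circuit
  }

  onFailure(): void {
    this.failures++;
    if (this.failures === this.threshold) this.openedAt = this.now();
  }
}
```

While a provider's circuit is open, the router skips it and falls through to the next model in the combo, which is what keeps a flapping provider from stalling requests.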

📊 Observability & Analytics

| Feature | What It Does |
|---|---|
| 📝 Request Logging | Debug mode with full request/response logs |
| 💾 SQLite Proxy Logs | Persistent proxy logs survive server restarts |
| 📊 Analytics Dashboard | Recharts-powered: stat cards, model usage chart, provider table |
| 📈 Progress Tracking | Opt-in SSE progress events for streaming |
| 🧪 LLM Evaluations | Golden set testing with 4 match strategies |
| 🔍 Request Telemetry | p50/p95/p99 latency aggregation + X-Request-Id tracing |
| 📋 Logs Dashboard | Unified 4-tab page: Request Logs, Proxy Logs, Audit Logs, Console |
| 🖥️ Console Log Viewer | Real-time terminal-style viewer with level filter, search, auto-scroll |
| 📑 File-Based Logging | Console interceptor captures all output to JSON log file with rotation |
| 🏥 Health Dashboard | System uptime, circuit breaker states, lockouts, cache stats |
| 💰 Cost Tracking | Budget management + per-model pricing configuration |

☁️ Deployment & Sync

| Feature | What It Does |
|---|---|
| 💾 Cloud Sync | Sync config across devices via Cloudflare Workers |
| 🌐 Deploy Anywhere | Localhost, VPS, Docker, Cloudflare Workers |
| 🔑 API Key Management | Generate, rotate, and scope API keys per provider |
| 🧙 Onboarding Wizard | 4-step guided setup for first-time users |
| 🔧 CLI Tools Dashboard | One-click configure Claude, Codex, Cline, OpenClaw, Kilo, Antigravity |
| 🔄 DB Backups | Automatic backup, restore, export & import for all settings |

📖 Feature Details

🎯 Smart 4-Tier Fallback

Create combos with automatic fallback:

Combo: "my-coding-stack"
  1. cc/claude-opus-4-6        (your subscription)
  2. nvidia/llama-3.3-70b      (free NVIDIA API)
  3. glm/glm-4.7               (cheap backup, $0.6/1M)
  4. if/kimi-k2-thinking       (free fallback)

→ Auto switches when quota runs out or errors occur

📊 Real-Time Quota Tracking

  • Token consumption per provider
  • Reset countdown (5-hour, daily, weekly)
  • Cost estimation for paid tiers
  • Monthly spending reports

🔄 Format Translation

Seamless translation between formats:

  • OpenAI ↔ Claude ↔ Gemini ↔ OpenAI Responses
  • Your CLI tool sends OpenAI format → OmniRoute translates → Provider receives native format
  • Works with any tool that supports custom OpenAI endpoints
  • Response sanitization — Strips non-standard fields for strict OpenAI SDK compatibility
  • Role normalization — developer→system for non-OpenAI; system→user for GLM/ERNIE models
  • Think tag extraction — <think> blocks → reasoning_content for thinking models
  • Structured output — json_schema → Gemini's responseMimeType/responseSchema
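The think-tag extraction step can be pictured as a small transform over the raw provider output (a sketch; the field name follows the reasoning_content convention used by thinking models):

```typescript
// Sketch: move a leading <think>...</think> block out of the visible
// content and into reasoning_content, as done for thinking models.
interface AssistantMessage {
  content: string;
  reasoning_content?: string;
}

function extractThink(raw: string): AssistantMessage {
  const match = raw.match(/^\s*<think>([\s\S]*?)<\/think>\s*/);
  if (!match) return { content: raw }; // no think block: pass through unchanged
  return {
    content: raw.slice(match[0].length),
    reasoning_content: match[1].trim(),
  };
}
```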

👥 Multi-Account Support

  • Add multiple accounts per provider
  • Auto round-robin or priority-based routing
  • Fallback to next account when one hits quota
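Round-robin over multiple accounts can be pictured as a per-provider cursor that skips exhausted accounts (a sketch under assumed names, not the actual implementation):

```typescript
// Sketch of per-provider round-robin account selection: the cursor advances
// on each call, and accounts that have hit quota are skipped.
interface Account {
  id: string;
  quotaExhausted: boolean;
}

class RoundRobinSelector {
  private cursors = new Map<string, number>();

  pick(provider: string, accounts: Account[]): Account | undefined {
    const start = this.cursors.get(provider) ?? 0;
    for (let i = 0; i < accounts.length; i++) {
      const idx = (start + i) % accounts.length;
      if (!accounts[idx].quotaExhausted) {
        this.cursors.set(provider, (idx + 1) % accounts.length);
        return accounts[idx];
      }
    }
    return undefined; // every account exhausted: fall back to the next tier
  }
}
```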

🔄 Auto Token Refresh

  • OAuth tokens automatically refresh before expiration
  • No manual re-authentication needed
  • Seamless experience across all providers

🎨 Custom Combos

  • Create unlimited model combinations
  • 6 strategies: fill-first, round-robin, power-of-two-choices, random, least-used, cost-optimized
  • Share combos across devices with Cloud Sync
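Of the six strategies, power-of-two-choices (p2c) is the least obvious: sample two candidates at random and send the request to the less loaded one. A sketch (names and the load metric are illustrative):

```typescript
// Sketch of the power-of-two-choices (p2c) strategy: pick two distinct
// candidates at random and route to the one with fewer in-flight requests.
interface Candidate {
  model: string;
  inFlight: number; // current in-flight request count (illustrative load metric)
}

function pickP2C(candidates: Candidate[], rand: () => number = Math.random): Candidate {
  if (candidates.length === 1) return candidates[0];
  const i = Math.floor(rand() * candidates.length);
  let j = Math.floor(rand() * (candidates.length - 1));
  if (j >= i) j++; // shift to guarantee two distinct indices
  return candidates[i].inFlight <= candidates[j].inFlight ? candidates[i] : candidates[j];
}
```

Compared with plain random selection, p2c sharply reduces the chance of piling requests onto an already-busy provider while staying O(1) per decision.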

🏥 Health Dashboard

  • System status (uptime, version, memory usage)
  • Circuit breaker states per provider (Closed/Open/Half-Open)
  • Rate limit status and active lockouts
  • Signature cache statistics
  • Latency telemetry (p50/p95/p99) + prompt cache
  • Reset health status with one click

🔧 Translator Playground

OmniRoute includes a powerful built-in Translator Playground with 4 modes for debugging, testing, and monitoring API translations:

| Mode | Description |
|---|---|
| 💻 Playground | Direct format translation — paste any API request body and instantly see how OmniRoute translates it between provider formats (OpenAI ↔ Claude ↔ Gemini ↔ Responses API). Includes example templates and format auto-detection. |
| 💬 Chat Tester | Send real chat requests through OmniRoute and see the full round-trip: your input, the translated request, the provider response, and the translated response back. Invaluable for validating combo routing. |
| 🧪 Test Bench | Batch testing mode — define multiple test cases with different inputs and expected outputs, run them all at once, and compare results across providers and models. |
| 📱 Live Monitor | Real-time request monitoring — watch incoming requests as they flow through OmniRoute, see format translations happening live, and identify issues instantly. |

Access: Dashboard → Translator (sidebar)

💾 Cloud Sync

  • Sync providers, combos, and settings across devices
  • Automatic background sync
  • Secure encrypted storage

📖 Setup Guide

💳 Subscription Providers

Claude Code (Pro/Max)

Dashboard → Providers → Connect Claude Code
→ OAuth login → Auto token refresh
→ 5-hour + weekly quota tracking

Models:
  cc/claude-opus-4-6
  cc/claude-sonnet-4-5-20250929
  cc/claude-haiku-4-5-20251001

Pro Tip: Use Opus for complex tasks, Sonnet for speed. OmniRoute tracks quota per model!

OpenAI Codex (Plus/Pro)

Dashboard → Providers → Connect Codex
→ OAuth login (port 1455)
→ 5-hour + weekly reset

Models:
  cx/gpt-5.2-codex
  cx/gpt-5.1-codex-max

Gemini CLI (FREE 180K/month!)

Dashboard → Providers → Connect Gemini CLI
→ Google OAuth
→ 180K completions/month + 1K/day

Models:
  gc/gemini-3-flash-preview
  gc/gemini-2.5-pro

Best Value: Huge free tier! Use this before paid tiers.

GitHub Copilot

Dashboard → Providers → Connect GitHub
→ OAuth via GitHub
→ Monthly reset (1st of month)

Models:
  gh/gpt-5
  gh/claude-4.5-sonnet
  gh/gemini-3-pro

🔑 API Key Providers

NVIDIA NIM (FREE 1000 credits!)

  1. Sign up: build.nvidia.com
  2. Get free API key (1000 inference credits included)
  3. Dashboard → Add Provider → NVIDIA NIM:
    • API Key: nvapi-your-key

Models: nvidia/llama-3.3-70b-instruct, nvidia/mistral-7b-instruct, and 50+ more

Pro Tip: OpenAI-compatible API — works seamlessly with OmniRoute's format translation!

DeepSeek

  1. Sign up: platform.deepseek.com
  2. Get API key
  3. Dashboard → Add Provider → DeepSeek

Models: deepseek/deepseek-chat, deepseek/deepseek-coder

Groq (Free Tier Available!)

  1. Sign up: console.groq.com
  2. Get API key (free tier included)
  3. Dashboard → Add Provider → Groq

Models: groq/llama-3.3-70b, groq/mixtral-8x7b

Pro Tip: Ultra-fast inference — best for real-time coding!

OpenRouter (100+ Models)

  1. Sign up: openrouter.ai
  2. Get API key
  3. Dashboard → Add Provider → OpenRouter

Models: Access 100+ models from all major providers through a single API key.

💰 Cheap Providers (Backup)

GLM-4.7 (Daily reset, $0.6/1M)

  1. Sign up: Zhipu AI
  2. Get API key from Coding Plan
  3. Dashboard → Add API Key:
    • Provider: glm
    • API Key: your-key

Use: glm/glm-4.7

Pro Tip: Coding Plan offers 3× quota at 1/7 the cost! Quota resets daily at 10:00 AM.

MiniMax M2.1 (5h reset, $0.20/1M)

  1. Sign up: MiniMax
  2. Get API key
  3. Dashboard → Add API Key

Use: minimax/MiniMax-M2.1

Pro Tip: Cheapest option for long context (1M tokens)!

Kimi K2 ($9/month flat)

  1. Subscribe: Moonshot AI
  2. Get API key
  3. Dashboard → Add API Key

Use: kimi/kimi-latest

Pro Tip: Fixed $9/month for 10M tokens = $0.90/1M effective cost!

🆓 FREE Providers (Emergency Backup)

iFlow (8 FREE models)

Dashboard → Connect iFlow
→ iFlow OAuth login
→ Unlimited usage

Models:
  if/kimi-k2-thinking
  if/qwen3-coder-plus
  if/glm-4.7
  if/minimax-m2
  if/deepseek-r1

Qwen (3 FREE models)

Dashboard → Connect Qwen
→ Device code authorization
→ Unlimited usage

Models:
  qw/qwen3-coder-plus
  qw/qwen3-coder-flash

Kiro (Claude FREE)

Dashboard → Connect Kiro
→ AWS Builder ID or Google/GitHub
→ Unlimited usage

Models:
  kr/claude-sonnet-4.5
  kr/claude-haiku-4.5

🎨 Create Combos

Example 1: Maximize Subscription → Cheap Backup

Dashboard → Combos → Create New

Name: premium-coding
Models:
  1. cc/claude-opus-4-6 (Subscription primary)
  2. glm/glm-4.7 (Cheap backup, $0.6/1M)
  3. minimax/MiniMax-M2.1 (Cheapest fallback, $0.20/1M)

Use in CLI: premium-coding

Example 2: Free-Only (Zero Cost)

Name: free-combo
Models:
  1. gc/gemini-3-flash-preview (180K free/month)
  2. if/kimi-k2-thinking (unlimited)
  3. qw/qwen3-coder-plus (unlimited)

Cost: $0 forever!

🔧 CLI Integration

Cursor IDE

Settings → Models → Advanced:
  OpenAI API Base URL: http://localhost:20128/v1
  OpenAI API Key: [from OmniRoute dashboard]
  Model: cc/claude-opus-4-6

Claude Code

Use the CLI Tools page in the dashboard for one-click configuration, or edit ~/.claude/settings.json manually.

Codex CLI

export OPENAI_BASE_URL="http://localhost:20128"
export OPENAI_API_KEY="your-omniroute-api-key"

codex "your prompt"

OpenClaw

Option 1 — Dashboard (recommended):

Dashboard → CLI Tools → OpenClaw → Select Model → Apply

Option 2 — Manual: Edit ~/.openclaw/openclaw.json:

{
  "models": {
    "providers": {
      "omniroute": {
        "baseUrl": "http://127.0.0.1:20128/v1",
        "apiKey": "sk_omniroute",
        "api": "openai-completions"
      }
    }
  }
}

Note: OpenClaw only works with local OmniRoute. Use 127.0.0.1 instead of localhost to avoid IPv6 resolution issues.

Cline / Continue / RooCode

Settings → API Configuration:
  Provider: OpenAI Compatible
  Base URL: http://localhost:20128/v1
  API Key: [from OmniRoute dashboard]
  Model: if/kimi-k2-thinking

📊 Available Models

View all available models

Claude Code (cc/) - Pro/Max:

  • cc/claude-opus-4-6
  • cc/claude-sonnet-4-5-20250929
  • cc/claude-haiku-4-5-20251001

Codex (cx/) - Plus/Pro:

  • cx/gpt-5.2-codex
  • cx/gpt-5.1-codex-max

Gemini CLI (gc/) - FREE:

  • gc/gemini-3-flash-preview
  • gc/gemini-2.5-pro

GitHub Copilot (gh/):

  • gh/gpt-5
  • gh/claude-4.5-sonnet

NVIDIA NIM (nvidia/) - FREE credits:

  • nvidia/llama-3.3-70b-instruct
  • nvidia/mistral-7b-instruct
  • 50+ more models on build.nvidia.com

GLM (glm/) - $0.6/1M:

  • glm/glm-4.7

MiniMax (minimax/) - $0.2/1M:

  • minimax/MiniMax-M2.1

iFlow (if/) - FREE:

  • if/kimi-k2-thinking
  • if/qwen3-coder-plus
  • if/deepseek-r1
  • if/glm-4.7
  • if/minimax-m2

Qwen (qw/) - FREE:

  • qw/qwen3-coder-plus
  • qw/qwen3-coder-flash

Kiro (kr/) - FREE:

  • kr/claude-sonnet-4.5
  • kr/claude-haiku-4.5

OpenRouter (or/) - 100+ models:


🧪 Evaluations (Evals)

OmniRoute includes a built-in evaluation framework to test LLM response quality against a golden set. Access it via Analytics → Evals in the dashboard.

Built-in Golden Set

The pre-loaded "OmniRoute Golden Set" contains 10 test cases covering:

  • Greetings, math, geography, code generation
  • JSON format compliance, translation, markdown
  • Safety refusal (harmful content), counting, boolean logic

Evaluation Strategies

| Strategy | Description | Example |
|---|---|---|
| exact | Output must match exactly | "4" |
| contains | Output must contain substring (case-insensitive) | "Paris" |
| regex | Output must match regex pattern | "1.*2.*3" |
| custom | Custom JS function returns true/false | (output) => output.length > 10 |
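The four strategies reduce to a small matcher function; a sketch of how each one could be implemented (illustrative, not the framework's internal code):

```typescript
// Sketch of the four evaluation match strategies: exact, contains
// (case-insensitive), regex, and a custom predicate.
type Matcher =
  | { type: "exact"; expected: string }
  | { type: "contains"; expected: string }
  | { type: "regex"; pattern: string }
  | { type: "custom"; fn: (output: string) => boolean };

function evaluate(output: string, m: Matcher): boolean {
  switch (m.type) {
    case "exact":
      return output === m.expected;
    case "contains":
      return output.toLowerCase().includes(m.expected.toLowerCase());
    case "regex":
      return new RegExp(m.pattern).test(output);
    case "custom":
      return m.fn(output);
  }
}
```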

🔐 Remote OAuth Setup

⚠️ IMPORTANT for users running OmniRoute on a VPS, in Docker, or on any remote server

Why does Antigravity / Gemini CLI OAuth fail on remote servers?

The Antigravity and Gemini CLI providers use Google OAuth 2.0 for authentication. Google requires the redirect_uri used in the OAuth flow to exactly match one of the URIs pre-registered for the application in the Google Cloud Console.

The OAuth credentials bundled with OmniRoute are registered for localhost only. When you access OmniRoute on a remote server (e.g. https://omniroute.your-server.com), Google rejects the authentication with:

Error 400: redirect_uri_mismatch

Solution: configure your own OAuth credentials

You need to create an OAuth 2.0 Client ID in the Google Cloud Console with your server's URI.

Step by step

1. Open the Google Cloud Console

Open: https://console.cloud.google.com/apis/credentials

2. Create a new OAuth 2.0 Client ID

  • Click "+ Create Credentials" → "OAuth client ID"
  • Application type: "Web application"
  • Name: any name you like (e.g. OmniRoute Remote)

3. Add the Authorized Redirect URIs

In the "Authorized redirect URIs" field, add:

https://your-server.com/callback

Replace your-server.com with your server's domain or IP (include the port if needed, e.g. http://45.33.32.156:20128/callback).

4. Save and copy the credentials

Once the client is created, Google displays the Client ID and Client Secret.

5. Set the environment variables

In your .env (or in your Docker environment variables):

# For Antigravity:
ANTIGRAVITY_OAUTH_CLIENT_ID=your-client-id.apps.googleusercontent.com
ANTIGRAVITY_OAUTH_CLIENT_SECRET=GOCSPX-your-secret

# For Gemini CLI:
GEMINI_OAUTH_CLIENT_ID=your-client-id.apps.googleusercontent.com
GEMINI_OAUTH_CLIENT_SECRET=GOCSPX-your-secret
GEMINI_CLI_OAUTH_CLIENT_SECRET=GOCSPX-your-secret

6. Restart OmniRoute

# If using npm:
npm run dev

# If using Docker:
docker restart omniroute

7. Try connecting again

Dashboard → Providers → Antigravity (or Gemini CLI) → OAuth

Google will now redirect correctly to https://your-server.com/callback and authentication will succeed.


Temporary workaround (without configuring your own credentials)

If you don't want to create your own credentials yet, you can still use the manual URL flow:

  1. OmniRoute opens Google's authorization URL
  2. After you authorize, Google tries to redirect to localhost (which fails on a remote server)
  3. Copy the full URL from your browser's address bar (even if the page fails to load)
  4. Paste that URL into the field shown in OmniRoute's connection modal
  5. Click "Connect"

This workaround works because the authorization code in the URL is valid whether or not the redirect page loaded.
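The workaround relies on the authorization code surviving in the redirect URL's query string. Extracting it looks roughly like this (a sketch; OmniRoute's connection modal does the equivalent for you):

```typescript
// Sketch: pull the OAuth authorization code out of a pasted redirect URL,
// even when the localhost redirect page itself failed to load.
function extractAuthCode(pastedUrl: string): string | null {
  try {
    // searchParams.get also percent-decodes the value for us.
    return new URL(pastedUrl).searchParams.get("code");
  } catch {
    return null; // not a valid URL
  }
}
```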


🐛 Troubleshooting

Click to expand troubleshooting guide

"Language model did not provide messages"

  • Provider quota exhausted → Check dashboard quota tracker
  • Solution: Use combo fallback or switch to cheaper tier

Rate limiting

  • Subscription quota out → Fallback to GLM/MiniMax
  • Add combo: cc/claude-opus-4-6 → glm/glm-4.7 → if/kimi-k2-thinking

OAuth token expired

  • Auto-refreshed by OmniRoute
  • If issues persist: Dashboard → Provider → Reconnect

High costs

  • Check usage stats in Dashboard → Costs
  • Switch primary model to GLM/MiniMax
  • Use free tier (Gemini CLI, iFlow) for non-critical tasks

Dashboard opens on wrong port

  • Set PORT=20128 and NEXT_PUBLIC_BASE_URL=http://localhost:20128

Cloud sync errors

  • Verify BASE_URL points to your running instance
  • Verify CLOUD_URL points to your expected cloud endpoint
  • Keep NEXT_PUBLIC_* values aligned with server-side values

First login not working

  • Check INITIAL_PASSWORD in .env
  • If unset, fallback password is 123456

No request logs

  • Set ENABLE_REQUEST_LOGS=true in .env

Connection test shows "Invalid" for OpenAI-compatible providers

  • Many providers don't expose a /models endpoint
  • OmniRoute v1.0.6+ includes fallback validation via chat completions
  • Ensure base URL includes /v1 suffix

🛠️ Tech Stack

  • Runtime: Node.js 18–22 LTS (⚠️ Node.js 24+ is not supported: better-sqlite3 native binaries are incompatible)
  • Language: TypeScript 5.9 — 100% TypeScript across src/ and open-sse/ (v1.0.6)
  • Framework: Next.js 16 + React 19 + Tailwind CSS 4
  • Database: LowDB (JSON) + SQLite (domain state + proxy logs)
  • Streaming: Server-Sent Events (SSE)
  • Auth: OAuth 2.0 (PKCE) + JWT + API Keys
  • Testing: Node.js test runner (368+ unit tests)
  • CI/CD: GitHub Actions (auto npm publish + Docker Hub on release)
  • Website: omniroute.online
  • Package: npmjs.com/package/omniroute
  • Docker: hub.docker.com/r/diegosouzapw/omniroute
  • Resilience: Circuit breaker, exponential backoff, anti-thundering herd, TLS spoofing

📖 Documentation

| Document | Description |
|---|---|
| User Guide | Providers, combos, CLI integration, deployment |
| API Reference | All endpoints with examples |
| Troubleshooting | Common problems and solutions |
| Architecture | System architecture and internals |
| Contributing | Development setup and guidelines |
| OpenAPI Spec | OpenAPI 3.0 specification |
| Security Policy | Vulnerability reporting and security practices |
| VM Deployment | Complete guide: VM + nginx + Cloudflare setup |
| Features Gallery | Visual dashboard tour with screenshots |

📸 Dashboard Preview

Click to see dashboard screenshots
Screenshots: Providers, Combos, Analytics, Health, Translator, Settings, CLI Tools, Usage Logs, and Endpoint.

🗺️ Roadmap

OmniRoute has 210+ features planned across multiple development phases. Here are the key areas:

| Category | Planned Features | Highlights |
|---|---|---|
| 🧠 Routing & Intelligence | 25+ | Lowest-latency routing, tag-based routing, quota preflight, P2C account selection |
| 🔒 Security & Compliance | 20+ | SSRF hardening, credential cloaking, rate-limit per endpoint, management key scoping |
| 📊 Observability | 15+ | OpenTelemetry integration, real-time quota monitoring, cost tracking per model |
| 🔄 Provider Integrations | 20+ | Dynamic model registry, provider cooldowns, multi-account Codex, Copilot quota parsing |
| Performance | 15+ | Dual cache layer, prompt cache, response cache, streaming keepalive, batch API |
| 🌐 Ecosystem | 10+ | WebSocket API, config hot-reload, distributed config store, commercial mode |

🔜 Coming Soon

  • 🔗 OpenCode Integration — Native provider support for the OpenCode AI coding IDE
  • 🔗 TRAE Integration — Full support for the TRAE AI development framework
  • 📦 Batch API — Asynchronous batch processing for bulk requests
  • 🎯 Tag-Based Routing — Route requests based on custom tags and metadata
  • 💰 Lowest-Cost Strategy — Automatically select the cheapest available provider

📝 Full feature specifications available in docs/new-features/ (217 detailed specs)


📧 Support

💬 Join our community! WhatsApp Group — Get help, share tips, and stay updated.


👥 Contributors


How to Contribute

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

See CONTRIBUTING.md for detailed guidelines.

Releasing a New Version

# Create a release — npm publish happens automatically
gh release create v1.0.6 --title "v1.0.6" --generate-notes

📊 Star History

Star History Chart

🙏 Acknowledgments

Special thanks to 9router by decolua — the original project that inspired this fork. OmniRoute builds upon that incredible foundation with additional features, multi-modal APIs, and a full TypeScript rewrite.

Special thanks to CLIProxyAPI — the original Go implementation that inspired this JavaScript port.


📄 License

MIT License - see LICENSE for details.


Built with ❤️ for developers who code 24/7
omniroute.online
