The agent-native LLM router for OpenClaw. 41+ models, <1ms routing, USDC payments on Base & Solana via x402.
Updated Mar 17, 2026 - TypeScript
Smart LLM Routing for OpenClaw. Cut costs by up to 70% 🦞🦚
Open-source LLM router & AI cost optimizer. Routes simple prompts to cheap or local models and complex ones to premium models, automatically. Drop-in OpenAI-compatible proxy for Claude Code, Codex, Cursor, and OpenClaw. Saves 40-70% on AI API costs. Self-hosted, no middleman.
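The tier-routing idea these projects share — cheap models for simple prompts, premium models for complex ones — can be sketched as below. This is a minimal illustration, not code from any listed router; the heuristic, threshold, and tier names are all assumptions.

```typescript
// Hypothetical sketch of complexity-based tier routing.
// Real routers use trained classifiers; this uses a crude heuristic.

type Tier = "cheap" | "premium";

function estimateComplexity(prompt: string): number {
  // Longer prompts and reasoning/code keywords score higher (illustrative).
  const keywords = ["refactor", "prove", "debug", "architecture", "optimize"];
  const lengthScore = Math.min(prompt.length / 500, 1);
  const keywordScore = keywords.some((k) => prompt.toLowerCase().includes(k))
    ? 0.5
    : 0;
  return Math.min(lengthScore + keywordScore, 1);
}

function routeTier(prompt: string, threshold = 0.4): Tier {
  return estimateComplexity(prompt) >= threshold ? "premium" : "cheap";
}
```

A proxy built this way would pick the tier before forwarding the request to its OpenAI-compatible upstream; the threshold is the knob that trades cost against quality.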
Route LLM requests to the best model for the task at hand.
High-performance, lightweight proxy and load balancer for LLM infrastructure. Intelligent routing, automatic failover, and unified model discovery across local and remote inference backends.
Unified management and routing for llama.cpp, MLX and vLLM models with web dashboard.
Route inference across LLM providers. Track cost per request.
RouterArena: An open framework for evaluating LLM routers with standardized datasets, metrics, an automated framework, and a live leaderboard.
SmarterRouter: An intelligent LLM gateway and VRAM-aware router for Ollama, llama.cpp, and OpenAI. Features semantic caching, model profiling, and automatic failover for local AI labs.
LLM orchestration toolkit for agent workflows: planner + workers + synthesis, optional router (LLM + learned fallback), supports OpenAI/Anthropic/Ollama/llama.cpp, real scraping with caching, MCP server integration, and a TUI chat UI.
Claude Code hooks that auto-switch model tier based on task complexity
IYKYK
Free, self-hosted AI model router. OpenRouter / ClawRouter alternative using your own API keys. 14-dimension classifier routes to the right model (Anthropic/OpenAI/Kimi) automatically. No middleman, no markup. Built for OpenClaw.
Local LLM proxy, DevOps friendly
Open-source pipeline for generating and augmenting Arch-Router-style conversational routing datasets.
Stop being locked into one LLM provider. UnifyRoute is a self-hosted gateway that routes, fails over, and manages quotas across OpenAI, Anthropic, and more — with a drop-in OpenAI-compatible API.
Talu is a single-binary, local-first LLM runtime with a Zig core and multi-language bindings — CLI, Python API, HTTP server, plugin-extensible Web UI, structured output, quantization, embeddings, and unified local/remote model routing.
Nimbus is a zero-dependency local proxy that routes your IDE's AI requests to inference backends through a unified OpenAI-compatible endpoint. Supports multiple providers via a BYOK (bring-your-own-key) model.
Open-LLM Router is an OpenAI-compatible API gateway that intelligently routes LLM requests across multiple configured providers and models with features like auto-routing, failover, logging, and metrics.
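Several of the gateways above advertise automatic failover. The core loop is simple: try each configured provider in order and return the first success. A minimal sketch, with an assumed `Provider` shape (the real projects' interfaces differ):

```typescript
// Hypothetical failover sketch: providers are tried in priority order;
// the first successful response wins, and all errors are collected
// so a total outage is diagnosable.

interface Provider {
  name: string;
  call: (prompt: string) => Promise<string>;
}

async function completeWithFailover(
  providers: Provider[],
  prompt: string,
): Promise<string> {
  const errors: string[] = [];
  for (const p of providers) {
    try {
      return await p.call(prompt);
    } catch (e) {
      errors.push(`${p.name}: ${e}`);
    }
  }
  throw new Error(`All providers failed:\n${errors.join("\n")}`);
}
```

Production gateways layer retries, timeouts, and health checks on top of this loop, but the ordering-plus-fallthrough skeleton is the same.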
LLM Router is a service that can be deployed on-premises or in the cloud. It adds a layer between any application and the LLM provider. In real time it controls traffic, distributes load among providers of a specific LLM, and enables security analysis of outgoing requests (masking, anonymization, prohibited-content detection).
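The masking step described above can be done with simple pattern redaction before the prompt leaves the proxy. The patterns below (emails and `sk-`-prefixed key-like tokens) are illustrative examples, not the project's actual rule set:

```typescript
// Hypothetical request-masking sketch: redact email addresses and
// API-key-like tokens from a prompt before forwarding it upstream.

function maskSensitive(text: string): string {
  return text
    .replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, "[EMAIL]")
    .replace(/sk-[A-Za-z0-9]{20,}/g, "[API_KEY]");
}
```

A real deployment would make the rule list configurable and log what was redacted for audit purposes.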