Route LLM requests to the best model for the task at hand.
Updated Sep 22, 2025 - Jupyter Notebook
Lightweight & fast AI inference proxy for self-hosted LLM backends such as Ollama, LM Studio, and others. Designed for speed, simplicity, and local-first deployments.
Unified management and routing for llama.cpp, MLX, and vLLM models, with a web dashboard.
Routes to the most performant and cost-efficient LLM based on your prompt [ 🚧 WIP ]
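Prompt-based routing like this typically applies a cheap classifier before any model is called. A minimal sketch of the idea is below; the model names, keyword list, and length threshold are illustrative assumptions, not taken from any of the projects above.

```python
# Minimal prompt-based router: a cheap heuristic decides whether a request
# can be served by a small, inexpensive model or needs a stronger one.
# Model names and thresholds here are hypothetical.

CHEAP_MODEL = "small-local-model"
STRONG_MODEL = "large-cloud-model"

# Keywords that hint at multi-step reasoning (assumed, not exhaustive).
COMPLEX_MARKERS = ("prove", "derive", "refactor", "step by step", "analyze")

def route(prompt: str) -> str:
    """Return the model name to use for this prompt."""
    text = prompt.lower()
    # Long prompts or prompts with reasoning keywords go to the strong model.
    if len(prompt) > 500 or any(m in text for m in COMPLEX_MARKERS):
        return STRONG_MODEL
    return CHEAP_MODEL

print(route("What is 2 + 2?"))                       # small-local-model
print(route("Derive the closed form step by step"))  # large-cloud-model
```

Real routers replace the keyword heuristic with a trained classifier or an embedding model, but the dispatch structure is the same.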
Secures markets of dLLMs (decentralized LLMs) using cryptographic techniques.
A lightweight AI model router for seamlessly switching between multiple AI providers (OpenAI, Anthropic, Google AI) through a unified API.
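The "unified API" pattern usually means one interface that each provider adapter implements. A sketch of that shape, with stub adapters standing in for the real OpenAI/Anthropic SDK calls (the class and method names are assumptions, not any project's actual API):

```python
# Unified provider interface: each backend implements the same `complete`
# method, and a router dispatches by provider name. The adapters are stubs;
# real code would call the openai / anthropic SDKs inside them.
from abc import ABC, abstractmethod

class Provider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class OpenAIProvider(Provider):
    def complete(self, prompt: str) -> str:
        return f"[openai] {prompt}"      # stand-in for an OpenAI SDK call

class AnthropicProvider(Provider):
    def complete(self, prompt: str) -> str:
        return f"[anthropic] {prompt}"   # stand-in for an Anthropic SDK call

class ModelRouter:
    """Dispatches requests to a named provider through one interface."""
    def __init__(self) -> None:
        self.providers: dict[str, Provider] = {}

    def register(self, name: str, provider: Provider) -> None:
        self.providers[name] = provider

    def complete(self, name: str, prompt: str) -> str:
        return self.providers[name].complete(prompt)

router = ModelRouter()
router.register("openai", OpenAIProvider())
router.register("anthropic", AnthropicProvider())
print(router.complete("anthropic", "hello"))  # [anthropic] hello
```

Because callers only see `ModelRouter.complete`, providers can be added or swapped without touching application code.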
Successfully developed an LLM application that provides AI-powered, structured insights based on user queries. The app features a dynamic response generator with progress indicators, interactive upvote/downvote options, and a clean, engaging user interface built using Streamlit. Ideal for personalized meal, fitness, and health-related advice.
A dynamic, input-based LLM-routing AI agent built with n8n. It selects the most suitable language model based on the user's query, then uses the selected model to answer or solve the input.
Hybrid LLM router: local + cloud models with meta-routing, memory, a FastAPI UI, and FAISS; integrates Ollama and OpenAI.
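A hybrid local + cloud router typically prefers the local backend and falls back to the cloud when the local model fails or is overloaded. A bare-bones sketch of that control flow, with both backends stubbed out (stand-ins for real Ollama and OpenAI calls; the failure condition is invented for illustration):

```python
# Local-first routing with cloud fallback, roughly the pattern a hybrid
# Ollama + OpenAI router follows. Both backends here are stubs.

def local_backend(prompt: str) -> str:
    # Stand-in for an Ollama call; we pretend long prompts exceed local capacity.
    if len(prompt) > 100:
        raise RuntimeError("local model overloaded")
    return f"local: {prompt}"

def cloud_backend(prompt: str) -> str:
    # Stand-in for a cloud API call (e.g. OpenAI).
    return f"cloud: {prompt}"

def hybrid_route(prompt: str) -> str:
    """Prefer the local model; fall back to the cloud on failure."""
    try:
        return local_backend(prompt)
    except RuntimeError:
        return cloud_backend(prompt)

print(hybrid_route("hi"))       # local: hi
print(hybrid_route("x" * 200))  # cloud: xxx...
```

Meta-routing layers a smarter decision (e.g. FAISS similarity against past queries) on top of this try/fallback skeleton rather than replacing it.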