A production-ready, scalable multi-agent system built with LangGraph Platform, featuring specialized agents, multi-provider LLM support, and complete LangGraph Studio compatibility.
- Orchestrator Agent: Intelligent routing to specialized sub-agents
- Multipurpose Bot: Math calculations, chitchat, and headphones expertise
- Hungry Services: Food search, recipes, and restaurant recommendations
- Embedding Service: Vector storage and knowledge management
- OpenAI (GPT-4.1, GPT-5)
- Anthropic (Claude 4)
- Google (Gemini)
- Ollama (Local models)
- Groq, Cohere, Mistral
- LangGraph Studio compatibility
- Persistent checkpointing (Memory, SQLite, PostgreSQL)
- Vector storage (Chroma, Pinecone, Weaviate)
- Streaming support
- Human-in-the-loop capabilities
- Time-travel debugging
- Docker deployment ready
- Python 3.11+
- LangSmith API key (free at smith.langchain.com)
- At least one LLM provider API key
# Clone the repository
git clone https://github.com/shamspias/langgraph-agent-system.git
cd langgraph-agent-system
# Create virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Install dependencies
pip install -e .
# Copy example environment
cp .env.example .env
# Edit .env with your API keys
nano .env # or use your preferred editor
Required settings:
LANGSMITH_API_KEY=your-langsmith-key-here
PRIMARY_PROVIDER=openai # or anthropic, ollama, etc.
OPENAI_API_KEY=your-openai-key-here # or other provider key
# Run with LangGraph Studio (recommended)
langgraph dev --tunnel
# This will output:
# Ready!
# - API: http://localhost:2024
# - Docs: http://localhost:2024/docs
# - LangGraph Studio Web UI: https://smith.langchain.com/studio/?baseUrl=...
Click the Studio URL to open the visual interface!
python -m src.main cli
Example session:
[orchestrator]> What's 234 * 567?
[Multipurpose Response]
Let me calculate that for you:
Expression: 234*567
Result: **132678**
[orchestrator]> /switch hungry
✓ Switched to hungry agent
[hungry]> Find me Italian recipes
[Hungry Response]
Here are some delicious Italian recipes...
python -m src.main api
# Or with uvicorn
uvicorn src.api.server:create_app --factory --reload --host 0.0.0.0 --port 8000
API endpoints:
- `POST /chat` - Chat with agents
- `POST /embed` - Embed content
- `POST /search` - Search the vector store
- `GET /health` - Health check
- `GET /agents` - List available agents
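For orientation, here is a minimal sketch of how an app factory might wire these routes with FastAPI. The request model, the `run_agent` stub, and the handler bodies are illustrative assumptions; the real implementation lives in `src/api/server.py`.

```python
# Illustrative sketch only; names other than create_app are assumptions.
from fastapi import FastAPI
from pydantic import BaseModel


class ChatRequest(BaseModel):
    message: str
    agent: str = "orchestrator"


async def run_agent(agent: str, message: str) -> str:
    # Stand-in for invoking the named LangGraph graph.
    return f"[{agent}] echo: {message}"


def create_app() -> FastAPI:
    app = FastAPI()

    @app.post("/chat")
    async def chat(req: ChatRequest) -> dict:
        return {"response": await run_agent(req.agent, req.message)}

    @app.get("/health")
    async def health() -> dict:
        return {"status": "ok"}

    return app
```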
# Build and run with Docker Compose
docker-compose up -d
# Run specific services
docker-compose up -d langgraph-app postgres redis
# With Ollama for local models
docker-compose --profile ollama up -d
┌─────────────────────┐
│  LangGraph Studio   │
│    (Visual UI)      │
└──────────┬──────────┘
           │
┌──────────▼──────────┐
│    Orchestrator     │
│    (Main Router)    │
└──────────┬──────────┘
           │
    ┌──────┴─────┬───────────┬─────────────┐
    │            │           │             │
┌───▼───┐   ┌────▼───┐   ┌───▼───┐   ┌─────▼────┐
│ Multi │   │ Hungry │   │ Embed │   │ General  │
│purpose│   │Services│   │Service│   │Assistant │
└───┬───┘   └────────┘   └───────┘   └──────────┘
    │
┌───┴───┬────────┬──────────┐
│ Math  │Chitchat│Headphones│
│ Agent │ Agent  │  Agent   │
└───────┴────────┴──────────┘
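In LangGraph terms, the orchestrator boils down to a router feeding conditional edges. A minimal sketch, with a toy keyword rule standing in for the real LLM-based routing (the state shape and node bodies are illustrative, not the repository's actual code):

```python
from typing import Annotated, Literal, TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages


class State(TypedDict):
    messages: Annotated[list, add_messages]


def route(state: State) -> Literal["multipurpose", "hungry"]:
    # Toy rule for illustration; the real orchestrator classifies with an LLM.
    text = state["messages"][-1].content.lower()
    return "hungry" if "recipe" in text or "restaurant" in text else "multipurpose"


def multipurpose(state: State) -> dict:
    return {"messages": [("assistant", "Handled by the multipurpose bot.")]}


def hungry(state: State) -> dict:
    return {"messages": [("assistant", "Handled by hungry services.")]}


workflow = StateGraph(State)
workflow.add_node("multipurpose", multipurpose)
workflow.add_node("hungry", hungry)
workflow.add_conditional_edges(START, route)
workflow.add_edge("multipurpose", END)
workflow.add_edge("hungry", END)
graph = workflow.compile()

print(graph.invoke({"messages": [("user", "Find me a pasta recipe")]}))
```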
Configure your preferred provider in `.env`:
PRIMARY_PROVIDER=openai
OPENAI_API_KEY=sk-...
DEFAULT_MODEL=gpt-4o-mini
PRIMARY_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-...
DEFAULT_MODEL=claude-3-5-sonnet-latest
PRIMARY_PROVIDER=ollama
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=llama3.2
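A provider switch like this is commonly a thin wrapper around LangChain's `init_chat_model`. Here is a hedged sketch of what `src.core.providers.get_model` could look like; the env var names match the settings above, but the body is an assumption, not the repository's actual code:

```python
import os

from langchain.chat_models import init_chat_model


def get_model():
    # Sketch: dispatch on PRIMARY_PROVIDER from the .env settings above.
    provider = os.environ.get("PRIMARY_PROVIDER", "openai")
    if provider == "ollama":
        # Local models are addressed via OLLAMA_MODEL and OLLAMA_BASE_URL.
        return init_chat_model(
            os.environ.get("OLLAMA_MODEL", "llama3.2"),
            model_provider="ollama",
            base_url=os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434"),
        )
    return init_chat_model(
        os.environ.get("DEFAULT_MODEL", "gpt-4o-mini"),
        model_provider=provider,
    )
```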
MEMORY_TYPE=postgres
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
POSTGRES_DB=langgraph_memory
POSTGRES_USER=postgres
POSTGRES_PASSWORD=secure-password
MEMORY_TYPE=sqlite
MEMORY_DB_PATH=./data/memory.db
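Under the hood, `MEMORY_TYPE` presumably selects one of LangGraph's checkpointer classes. A sketch of a `get_checkpointer` helper along those lines (connection handling is simplified and the repository's actual code may differ):

```python
import os

from langgraph.checkpoint.memory import MemorySaver


def get_checkpointer():
    """Pick a checkpointer from MEMORY_TYPE; a sketch, not the repo's code."""
    memory_type = os.environ.get("MEMORY_TYPE", "memory")
    if memory_type == "sqlite":
        import sqlite3

        from langgraph.checkpoint.sqlite import SqliteSaver

        conn = sqlite3.connect(
            os.environ.get("MEMORY_DB_PATH", "./data/memory.db"),
            check_same_thread=False,
        )
        return SqliteSaver(conn)
    if memory_type == "postgres":
        from langgraph.checkpoint.postgres import PostgresSaver

        dsn = (
            f"postgresql://{os.environ['POSTGRES_USER']}:{os.environ['POSTGRES_PASSWORD']}"
            f"@{os.environ['POSTGRES_HOST']}:{os.environ['POSTGRES_PORT']}"
            f"/{os.environ['POSTGRES_DB']}"
        )
        # from_conn_string is a context manager; enter it once for app lifetime.
        saver = PostgresSaver.from_conn_string(dsn).__enter__()
        saver.setup()  # create checkpoint tables on first run
        return saver
    return MemorySaver()
```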
VECTOR_STORE_TYPE=chroma
VECTOR_DB_PATH=./data/vector_store
CHUNK_SIZE=500
CHUNK_OVERLAP=50
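`CHUNK_SIZE` and `CHUNK_OVERLAP` control how documents are split before embedding. A minimal sketch of that splitting step using LangChain's text splitter; how the chunks then flow into the embedding service is assumed:

```python
import os

from langchain_text_splitters import RecursiveCharacterTextSplitter

# Mirror the .env settings above; defaults match CHUNK_SIZE/CHUNK_OVERLAP.
splitter = RecursiveCharacterTextSplitter(
    chunk_size=int(os.environ.get("CHUNK_SIZE", 500)),
    chunk_overlap=int(os.environ.get("CHUNK_OVERLAP", 50)),
)

chunks = splitter.split_text(
    "The Sony WH-1000XM5 are premium headphones with adaptive noise "
    "cancelling, thirty-hour battery life, and multipoint pairing."
)
print(len(chunks), "chunk(s)")  # short inputs yield a single chunk
```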
import asyncio

import httpx


async def main() -> None:
    async with httpx.AsyncClient() as client:
        # Chat with the orchestrator
        response = await client.post(
            "http://localhost:8000/chat",
            json={
                "message": "What's the best Italian restaurant nearby?",
                "agent": "orchestrator",
            },
        )
        print(response.json())

        # Embed content into the vector store
        response = await client.post(
            "http://localhost:8000/embed",
            json={
                "content": "The Sony WH-1000XM5 are premium headphones...",
                "collection_name": "headphones_knowledge",
            },
        )
        print(response.json())


asyncio.run(main())
// Chat with agent
const response = await fetch('http://localhost:8000/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    message: 'Calculate 15% tip on $85',
    agent: 'multipurpose'
  })
});

const data = await response.json();
console.log(data.response);
# Chat request
curl -X POST http://localhost:8000/chat \
-H "Content-Type: application/json" \
-d '{"message": "Find me a pasta recipe", "agent": "hungry"}'
# Embed content
curl -X POST http://localhost:8000/embed \
-H "Content-Type: application/json" \
-d '{"content": "https://example.com/article", "collection_name": "articles"}'
# Test all graphs
python -m src.main test
# Run unit tests
pytest tests/
# Test with coverage
pytest --cov=src tests/
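A unit test for the chat endpoint can lean on FastAPI's `TestClient`. A hypothetical example; the module path and response shape are assumptions based on the API examples above:

```python
# tests/test_chat_api.py -- illustrative; adjust names to the real app factory
from fastapi.testclient import TestClient

from src.api.server import create_app


def test_chat_returns_a_response():
    client = TestClient(create_app())
    resp = client.post(
        "/chat",
        json={"message": "What's 2 + 2?", "agent": "multipurpose"},
    )
    assert resp.status_code == 200
    assert "response" in resp.json()
```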
- Visual Graph Inspection: See your agent's decision flow in real-time
- Time Travel: Step through past executions and modify state (see the sketch after this list)
- State Inspection: View complete state at any point
- Tool Visualization: See tool calls and results
- Human-in-the-Loop: Pause and modify execution
LANGSMITH_TRACING=true
LANGSMITH_PROJECT=my-agent-project
View traces at smith.langchain.com
from src.utils.logging import get_logger
logger = get_logger("my_module", user_id="123")
logger.info("Processing request", agent="multipurpose", action="calculate")
# Deploy to LangGraph Platform
# 1. Push to GitHub
# 2. Connect repo in LangSmith Deployments
# 3. Click Deploy
# Production build
docker build -t langgraph-agent-system:prod .
# Run with environment
docker run -d \
--name langgraph-prod \
-p 8000:8000 \
-p 2024:2024 \
--env-file .env.prod \
langgraph-agent-system:prod
apiVersion: apps/v1
kind: Deployment
metadata:
  name: langgraph-agents
spec:
  replicas: 3
  selector:
    matchLabels:
      app: langgraph
  template:
    metadata:
      labels:
        app: langgraph
    spec:
      containers:
        - name: agent-system
          image: langgraph-agent-system:prod
          ports:
            - containerPort: 8000
            - containerPort: 2024
          envFrom:
            - configMapRef:
                name: langgraph-config
- Create your agent graph in `src/graphs/` (a fleshed-out sketch follows after these steps):
# src/graphs/custom_agent.py
from langgraph.graph import StateGraph, START, END
from src.core.providers import get_model, get_checkpointer


def build_custom_graph() -> StateGraph:
    workflow = StateGraph(YourState)
    # Add nodes and edges
    return workflow


# Required for langgraph.json
graph = build_custom_graph().compile(checkpointer=get_checkpointer())
- Register it in `langgraph.json`:
{
  "graphs": {
    "custom": "./src/graphs/custom_agent:graph"
  }
}
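For reference, here is a fleshed-out version of the skeleton above, assuming a message-based state. `get_model` and `get_checkpointer` are the repo's helpers; the state class and node body are illustrative:

```python
# src/graphs/custom_agent.py -- expanded sketch, assuming a chat-style state
from typing import Annotated, TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

from src.core.providers import get_model, get_checkpointer


class YourState(TypedDict):
    messages: Annotated[list, add_messages]


def call_model(state: YourState) -> dict:
    # Single LLM call; swap in tools or conditional edges as the agent grows.
    return {"messages": [get_model().invoke(state["messages"])]}


workflow = StateGraph(YourState)
workflow.add_node("agent", call_model)
workflow.add_edge(START, "agent")
workflow.add_edge("agent", END)

# Required for langgraph.json
graph = workflow.compile(checkpointer=get_checkpointer())
```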
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
MIT License - see LICENSE file for details.
- Built with LangGraph
- Powered by LangChain
- UI by LangGraph Studio
- Issues: GitHub Issues