A complete starter template for building AI agents with LangGraph, featuring MCP integration, chat UI, and LangSmith observability.
- 🤖 **LangGraph ReAct Agent** - Ready-to-use agent with hardcoded prompts
- 🔌 **MCP Integration** - Model Context Protocol server support (filesystem included)
- 💬 **Agent Chat UI** - Next.js-based web interface for agent interaction
- 📊 **LangSmith Tracing** - Built-in observability and monitoring
- 🐳 **Docker Ready** - Containerized development with hot reload
- ⚡ **Fast Setup** - One command to start everything
```bash
git clone <your-repo>
cd <your-repo>

# Create environment file
cp .env.example .env

# Edit .env and add your API keys:
# ANTHROPIC_API_KEY=your_anthropic_api_key_here
# LANGSMITH_API_KEY=your_langsmith_api_key_here

# Start both agent and chat UI
docker compose up -d

# View logs
docker compose logs -f

# Stop services
docker compose down
```

Access the chat UI at: http://localhost:40004 🎉
- **MCP Configuration Options**

The system uses `agent/mcp_integration/servers.json` by default. To customize without affecting the template:

```bash
# Create your own servers.json at project root (gitignored)
cp agent/mcp_integration/servers.json servers.json
# Edit servers.json with your configuration
```
Configuration priority:

1. `servers.json` at project root (if it exists): your custom config
2. `agent/mcp_integration/servers.json`: the default template config
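The lookup order above is simple enough to sketch in a few lines. This helper is illustrative only; the template's actual loader in `agent/mcp_integration/` may be structured differently:

```python
from pathlib import Path


def resolve_servers_config(project_root: str = ".") -> Path:
    """Pick the MCP servers.json: a root-level override wins over the template default."""
    override = Path(project_root) / "servers.json"
    default = Path(project_root) / "agent" / "mcp_integration" / "servers.json"
    return override if override.exists() else default
```

Because the root-level `servers.json` is gitignored, your overrides never touch the template's tracked files.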
- **Edit MCP Configuration**

```json
{
  "servers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "."],
      "transport": "stdio"
    },
    "your_server": {
      "command": "your-command",
      "args": ["your", "args"],
      "transport": "stdio"
    }
  }
}
```
- **Add Environment Variables** (if needed)

```bash
# .env
YOUR_SERVER_TOKEN=your_token_here
```
- **Agent Behavior**: edit `agent/prompts.py`
- **Agent State**: modify `agent/state.py`
- **Agent Logic**: update `agent/graph.py`
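Since prompts are hardcoded, changing agent behavior is usually a one-line edit. A minimal sketch of what `agent/prompts.py` might look like (the constant name and wording here are illustrative, not the template's actual contents):

```python
# agent/prompts.py (illustrative sketch)
SYSTEM_PROMPT = (
    "You are a helpful assistant with access to MCP tools. "
    "Prefer the filesystem tools when the user asks about local files."
)
```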
The chat UI lives in `agent-chat-ui/`. For development and builds, use the provided containers.
The template uses direct tool execution for simplicity. If you need approval gates for write operations:
- Create an approval wrapper class in `agent/graph.py`
- Wrap tools during initialization based on operation type
- Use LangGraph interrupts to pause execution for approval
- Add approval handling in your UI or CLI
Example approval wrapper:
```python
from langchain_core.tools import BaseTool
from langgraph.types import interrupt


class ApprovalTool(BaseTool):
    def _run(self, **kwargs):
        # Pause the graph and wait for human approval before write operations
        if self._is_write_operation():
            approval = interrupt({"type": "approval", "tool": self.name})
            if not approval.get("approved"):
                return "Operation cancelled"
        return self.wrapped_tool.run(**kwargs)
```

```
/
├── agent/                   # Core agent implementation
│   ├── graph.py             # LangGraph agent definition
│   ├── prompts.py           # System prompts (hardcoded)
│   ├── config.py            # Agent configuration options
│   └── mcp_integration/     # MCP server configuration
├── agent-chat-ui/           # Next.js chat interface
│   ├── Dockerfile           # Chat UI container
│   └── .env                 # Pre-configured for localhost
├── infra/                   # Infrastructure (LangSmith, etc.)
├── Dockerfile               # Agent container
├── docker-compose.yml       # Development with hot reload (default)
├── docker-compose.prod.yml  # Production Docker setup
└── langgraph.json           # LangGraph deployment config
```
```bash
docker compose exec agent black . && \
docker compose exec agent ruff check . && \
docker compose exec agent mypy .
```

- Create a YAML dataset (see the sample at `infra/langsmith/examples/sample_dataset.yaml`).
- Run the evaluation inside the agent container:

```bash
docker compose exec agent python scripts/run_evaluation.py \
  --dataset-file infra/langsmith/examples/sample_dataset.yaml \
  --json
```

YAML schema:
```yaml
dataset:
  name: my-eval-dataset  # required
  description: Optional description
  judge_model: anthropic:claude-3-5-sonnet-latest
examples:
  - inputs:
      question: "What is 2 + 2?"
    outputs:
      answer: "4"
```

This template includes LangSmith integration for:
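A quick structural check before running an evaluation can save a failed round trip. This validator is illustrative; the template's `scripts/run_evaluation.py` may enforce a different schema:

```python
def validate_dataset(doc: dict) -> list[str]:
    """Return a list of schema problems; an empty list means the dataset looks valid."""
    errors = []
    dataset = doc.get("dataset") or {}
    if not dataset.get("name"):
        errors.append("dataset.name is required")
    examples = doc.get("examples") or []
    if not examples:
        errors.append("at least one example is required")
    for i, ex in enumerate(examples):
        if "inputs" not in ex:
            errors.append(f"examples[{i}] is missing 'inputs'")
        if "outputs" not in ex:
            errors.append(f"examples[{i}] is missing 'outputs'")
    return errors
```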
- **Tracing**: every agent run is automatically traced
- **Datasets**: manage test cases and evaluations
- **Monitoring**: track performance and costs
View your traces at: https://smith.langchain.com
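Tracing is driven entirely by environment variables. A typical `.env` fragment (variable names follow LangSmith's documented conventions; values are placeholders, and the project name is an assumption):

```bash
# Enable LangSmith tracing for every agent run
LANGCHAIN_TRACING_V2=true
LANGSMITH_API_KEY=your_langsmith_api_key_here
# Optional: group runs under a named project
LANGCHAIN_PROJECT=my-agent-dev
```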
```bash
# Deploy to LangGraph Cloud
langgraph deploy

# Or use the included configuration
langgraph deploy --config langgraph.json
```

```bash
# Production build and run
docker compose -f docker-compose.prod.yml up -d

# Scale services
docker compose -f docker-compose.prod.yml up -d --scale agent=3
```
```bash
# Build agent image
docker build -t my-agent .

# Build chat UI image
docker build -t my-chat-ui ./agent-chat-ui

# Run with custom configuration
docker run -p 40003:40003 --env-file .env my-agent
docker run -p 40004:40004 my-chat-ui
```

- Fork the repository
- Create a feature branch
- Make your changes
- Add tests for new functionality
- Ensure all tests pass
- Submit a pull request
MIT License - see LICENSE file for details.
- 📖 **Documentation**: check the `CLAUDE.md` file for development context
- 🐛 **Issues**: report bugs via GitHub issues
- 💬 **Discussions**: use GitHub discussions for questions