A REST API for a conversational AI virtual agent that can answer questions about movies using the MovieLens dataset. The application uses a multi-agent LangGraph workflow to intelligently route, classify, and answer user queries about movies.
- Multi-Agent Architecture: Utilizes LangGraph with specialized agents for routing, intent classification, entity extraction, SQL generation, and weather queries
- Service Layer Architecture: Clean separation of business logic with a dedicated `ChatService` layer for session and conversation management
- Natural Language Queries: Answer questions about movies and weather using natural language
- MovieLens Dataset: Works with the MovieLens 100k dataset containing movies, users, ratings, and genres
- MCP Server Integration: Extensible Model Context Protocol (MCP) server for weather data with both HTTP and stdio transport support
- Weather Agent: Dedicated agent for weather forecasts and alerts using the National Weather Service API
- Streamlit Web UI: Interactive web-based chat interface for seamless user interaction
- Conversational Context: Maintains conversation history for context-aware responses with persistent SQLite storage
- Asynchronous Processing: Fully async graph execution and message processing for improved performance
- RESTful API: FastAPI-based REST API with comprehensive endpoints
- Multiple LLM Providers: Support for Ollama (local), OpenAI, and Groq inference models
- Tool Calling: Automatically generates and executes SQL queries and weather API calls based on user intent
The application uses a service-oriented, multi-agent LangGraph architecture with the following components:
- Smart Router: Determines if the query is about movies, weather, or needs clarification
- Intent Extractor: Classifies user intent (recommendation, specific movie query, genre exploration, weather forecast, etc.)
- Entity Extractor: Extracts structured entities (movie titles, genres, years, ratings, locations) from queries
- Tool Calling Agent: Generates and executes SQL queries for movie data and responds to user queries
- Weather Agent: Processes weather-related queries using the MCP Server to fetch forecasts and alerts
- Error Handler: Handles errors gracefully throughout the workflow
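A minimal, dependency-free sketch of how these agents chain together (the real application wires them as LangGraph nodes with conditional edges; the routing keywords and node order below are illustrative assumptions, not the actual implementation):

```python
# Illustrative sketch of the multi-agent flow: route, then branch to the
# movie pipeline, the weather agent, or a clarification response.

def smart_router(state: dict) -> str:
    """Decide which branch handles the query (keyword heuristic for illustration)."""
    text = state["query"].lower()
    if any(w in text for w in ("weather", "forecast", "alert")):
        return "weather"
    if any(w in text for w in ("movie", "rated", "genre")):
        return "movies"
    return "clarify"

def run_pipeline(query: str) -> dict:
    state = {"query": query}
    route = smart_router(state)
    state["route"] = route
    if route == "movies":
        # intent_extractor -> entity_extractor -> tool-calling agent (SQL)
        state["response"] = "SQL-backed movie answer"
    elif route == "weather":
        # weather agent -> MCP weather server -> NWS API
        state["response"] = "forecast/alert answer"
    else:
        state["response"] = "Could you clarify your question?"
    return state
```

In the real graph the router's decision drives a conditional edge rather than an `if`/`elif` chain, and each branch is its own node with shared state.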
- ChatService: Business logic layer that manages:
- Session creation and tracking
- Conversation history (persistent SQLite storage)
- Message processing and coordination with the agent graph
- Response generation
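The responsibilities above can be sketched as a small in-memory stand-in (illustrative only: the real `ChatService` persists history to SQLite and delegates message processing to the agent graph; all names here are assumptions):

```python
import uuid
from datetime import datetime, timezone

class ChatServiceSketch:
    """In-memory stand-in for the ChatService layer (real one is SQLite-backed)."""

    def __init__(self) -> None:
        self._history: dict[str, list[dict]] = {}

    def create_session(self) -> str:
        session_id = str(uuid.uuid4())
        self._history[session_id] = []
        return session_id

    def process_message(self, session_id: str, message: str) -> dict:
        # The real service hands the message (plus prior history) to the agent graph.
        response = f"(agent graph answer for: {message})"
        entry = {
            "user_message": message,
            "assistant_response": response,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }
        self._history[session_id].append(entry)
        return entry

    def get_messages(self, session_id: str, limit: int = 10) -> list[dict]:
        return self._history[session_id][-limit:]
```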
- Weather MCP Server: Model Context Protocol server providing:
- Weather forecast tool (using latitude/longitude)
- Weather alerts tool (using US state codes)
- Powered by the National Weather Service API
- Supports both HTTP and stdio transport protocols
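The two tools map onto National Weather Service endpoints. A sketch of the URL construction (endpoint shapes from api.weather.gov; the actual server registers these as MCP tools and handles fetching and error cases):

```python
# Sketch of how the two MCP tools map to National Weather Service endpoints.
NWS_BASE = "https://api.weather.gov"

def points_url(latitude: float, longitude: float) -> str:
    """First hop for a forecast: resolves a lat/lon to an NWS gridpoint."""
    return f"{NWS_BASE}/points/{latitude:.4f},{longitude:.4f}"

def alerts_url(state_code: str) -> str:
    """Active alerts for a two-letter US state code."""
    return f"{NWS_BASE}/alerts/active?area={state_code.upper()}"
```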
- Python >= 3.13
- SQLite (included with Python)
- LLM Provider (choose one):
  - Ollama (default): Run locally with tool-calling compatible models (e.g., `qwen3:8b`)
  - OpenAI: GPT-4, GPT-3.5-turbo, or other OpenAI models
  - Groq: Fast inference with models like `llama-4-scout-17b-16e-instruct`
- Streamlit (optional): For the web-based chat UI
- MCP Server Dependencies (optional): For weather functionality via Model Context Protocol
`uv` is a fast Python package installer and resolver. If you don't have `uv` installed, you can install it with:

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```

Then install the project dependencies:

```bash
# Install dependencies
uv pip install -r requirements.txt

# Or install the package in editable mode
uv pip install -e .
```

If you prefer using pip, you can install the dependencies with:

```bash
# Install dependencies
pip install -r requirements.txt

# Or install the package in editable mode
pip install -e .
```

If using Ollama (the default provider), install and set it up:

```bash
# Install Ollama (macOS/Linux)
curl -fsSL https://ollama.com/install.sh | sh

# Pull the model (default: qwen3:8b)
ollama pull qwen3:8b
```

Or use any other tool-calling compatible model from Ollama.
The application uses SQLite and automatically downloads the MovieLens 100k dataset on first run. To initialize the database:

```bash
# Run the data ingestion script
python -m convai.data.ingest
```

This will:

- Download the MovieLens 100k dataset
- Extract it to a temporary directory
- Load users, movies, genres, and ratings into the database
- Create `movielens.db` in the project root
- Clean up the downloaded and extracted files after the database is created
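For orientation, here is a simplified sketch of the tables the ingestion step populates (the column choices are assumptions for illustration; the real SQLAlchemy models in `convai/data/models.py` are authoritative):

```python
import sqlite3

# Simplified schema sketch; the real models are richer.
SCHEMA = """
CREATE TABLE users   (id INTEGER PRIMARY KEY, age INTEGER, gender TEXT);
CREATE TABLE movies  (id INTEGER PRIMARY KEY, title TEXT, release_year INTEGER);
CREATE TABLE genres  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE ratings (
    user_id  INTEGER REFERENCES users(id),
    movie_id INTEGER REFERENCES movies(id),
    rating   INTEGER,          -- MovieLens 100k ratings are 1-5
    PRIMARY KEY (user_id, movie_id)
);
"""

def init_db(path: str = ":memory:") -> sqlite3.Connection:
    """Create an empty database with the sketch schema."""
    conn = sqlite3.connect(path)
    conn.executescript(SCHEMA)
    return conn
```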
Create a `.env` file in the project root to customize settings:

```bash
# API Configuration
HOST=0.0.0.0
PORT=8000

# Database Configuration
DATABASE_URL=sqlite:///./movielens.db

# LLM Configuration (Ollama - Default)
MODEL_PROVIDER=ollama
MODEL_NAME=qwen3:8b
MODEL_TEMPERATURE=0.0

# MCP Server Configuration (for Weather functionality)
MCP_SERVER=http://127.0.0.1:8001/mcp

# Logging Configuration
LOG_LEVEL=info
```

For OpenAI:

```bash
MODEL_PROVIDER=openai
MODEL_NAME=gpt-4
API_KEY=your_openai_api_key_here
```

For Groq (Fast Inference):

```bash
MODEL_PROVIDER=groq
MODEL_NAME=meta-llama/llama-4-scout-17b-16e-instruct
API_KEY=your_groq_api_key_here
```

To enable weather functionality, start the MCP server:

```bash
# Start with HTTP transport (default port 8001)
uv run python mcp_server/weather_server.py --transport http

# Or start with stdio transport
uv run python mcp_server/weather_server.py --transport stdio
```

The MCP server provides:
- Weather Forecasts: Get detailed forecasts using latitude/longitude
- Weather Alerts: Check active weather alerts by US state code
Leave this running in a separate terminal if you want weather query support.
Start the FastAPI server for REST API access:

```bash
# Run using uv
uv run python convai/app.py

# Or run directly with Python
python -m convai.app

# Or use uvicorn directly
uvicorn convai.app:app --host 0.0.0.0 --port 8000
```

The API will be available at http://localhost:8000.
Start the Streamlit web interface for an interactive chat experience:

```bash
# Run from project root
streamlit run convai/ui/streamlit_app.py

# Or specify a custom port
streamlit run convai/ui/streamlit_app.py --server.port 8502
```

The web UI will open in your browser at http://localhost:8501.
Features:
- Create new chat sessions with one click
- Manage multiple conversations
- Switch between sessions seamlessly
- View complete conversation history
- Modern, intuitive interface
For the complete experience with both API and UI:

```bash
# Terminal 1: Start MCP Weather Server (optional, for weather queries)
uv run python mcp_server/weather_server.py --transport http

# Terminal 2: Start FastAPI Server (if you want API access)
uv run python convai/app.py

# Terminal 3: Start Streamlit UI
streamlit run convai/ui/streamlit_app.py
```

Once the server is running, you can access:

- Interactive API Docs (Swagger UI): http://localhost:8000/docs
- ReDoc Documentation: http://localhost:8000/redoc
- Health Check: http://localhost:8000/health
```bash
curl -X POST http://localhost:8000/api/v1/chat/create
```

Response:

```json
{
  "session_id": "550e8400-e29b-41d4-a716-446655440000",
  "created_at": "2024-01-15T10:30:00Z"
}
```

```bash
curl -X POST http://localhost:8000/api/v1/chat/{session_id}/messages \
  -H "Content-Type: application/json" \
  -d '{
    "message": "What are the top 5 rated action movies?"
  }'
```

Response:

```json
{
  "message_id": "660e8400-e29b-41d4-a716-446655440001",
  "user_message": "What are the top 5 rated action movies?",
  "assistant_response": "Here are the top 5 rated action movies:\n1. The Shawshank Redemption (1994) - 4.8\n2. The Godfather (1972) - 4.8\n...",
  "timestamp": "2024-01-15T10:30:05Z"
}
```

```bash
curl http://localhost:8000/api/v1/chat/{session_id}/messages?limit=10
```

Movie Queries:
- "Show me action movies from the 1990s"
- "What are the highest rated comedies?"
- "Find movies similar to The Matrix"
- "What movies did user 1 rate highly?"
- "Compare the ratings of Pulp Fiction and Forrest Gump"
Weather Queries:
- "What's the weather forecast for San Francisco?" (uses lat/long: 37.7749, -122.4194)
- "Are there any weather alerts in California?" (state code: CA)
- "Show me the weather for New York City" (uses lat/long: 40.7128, -74.0060)
- "Get weather alerts for Texas" (state code: TX)
Create a new chat session.
Response: Session ID and creation timestamp
Send a message to an existing session.
Request Body:

```json
{
  "message": "Your question about movies"
}
```

Response: Message ID, user message, assistant response, and timestamp
Retrieve message history for a session.
Query Parameters:

- `limit` (optional): Number of messages to return (default: 10, max: 100)
Response: List of messages in the conversation
Health check endpoint.
Response: Service status and timestamp
```text
convai/
├── app.py                    # FastAPI application and API routes
├── data/
│   ├── database.py           # Database configuration and session management
│   ├── models.py             # SQLAlchemy models (User, Movie, Genre, Rating)
│   ├── schemas.py            # Pydantic schemas for API requests/responses
│   └── ingest.py             # Data ingestion from MovieLens dataset
├── services/
│   └── chat.py               # ChatService - business logic layer for session & conversation management
├── graph/
│   ├── graph.py              # Main LangGraph workflow orchestration
│   ├── state.py              # Graph state definition
│   └── nodes/
│       ├── smart_router.py   # Routing agent (movies vs weather vs clarification)
│       ├── intent_extractor.py # Intent classification agent
│       ├── entity_extractor.py # Entity extraction agent
│       ├── agent.py          # Tool calling agent - SQL query generation and execution
│       └── weather_agent.py  # Weather agent - MCP-based weather queries
├── ui/
│   ├── streamlit_app.py      # Streamlit web interface for interactive chat
│   └── README.md             # Streamlit UI documentation
├── prompts/                  # Prompt templates for LLM agents (.prompt files)
├── utils/
│   ├── config.py             # Application configuration and settings
│   ├── download.py           # Dataset download utilities
│   └── logger.py             # Logging configuration
└── tests/                    # Unit and integration tests
    ├── test_api.py           # FastAPI endpoint tests
    ├── test_graph.py         # LangGraph workflow tests
    └── test_weather_flow.py  # Weather agent integration tests

mcp_server/
└── weather_server.py         # MCP Weather Server (HTTP/stdio transport)
```
Run the test suite:

```bash
# Using pytest
pytest tests/

# With coverage
pytest tests/ --cov=convai --cov-report=html
```

For development with auto-reload:

```bash
# FastAPI with auto-reload
uvicorn convai.app:app --reload --host 0.0.0.0 --port 8000

# Streamlit with auto-reload (default behavior)
streamlit run convai/ui/streamlit_app.py
```

The project follows Python best practices and uses:
- FastAPI for the REST API
- Streamlit for the web UI
- SQLite for database storage
- SQLAlchemy for database ORM
- LangChain and LangGraph for LLM orchestration
- MCP (Model Context Protocol) for extensible tool integration
- Pydantic for data validation
- Asyncio for asynchronous processing
The test suite includes:

- API endpoint tests (`test_api.py`)
- Graph workflow tests (`test_graph.py`) with async support
- Weather agent integration tests (`test_weather_flow.py`)
If you encounter database errors:

- Ensure the database file `movielens.db` exists
- Re-run the ingestion script (after deleting `movielens.db` if it exists): `python -m convai.data.ingest`
Ollama:

- Ensure Ollama is running: `ollama serve`
- Verify the model is available: `ollama list`
- Pull the model if missing: `ollama pull qwen3:8b`
- If you don't want to use the `qwen3:8b` model, pull and use any other tool-calling compatible model
OpenAI:

- Set your API key: `export OPENAI_API_KEY=your_key_here` or add `API_KEY` to `.env`
- Verify the model name is correct (e.g., `gpt-4`, `gpt-3.5-turbo`)
Groq:

- Get your API key from the Groq Console
- Set it in `.env`: `API_KEY=your_groq_api_key_here`
- Ensure `MODEL_PROVIDER=groq` is set
- Verify the model name matches available Groq models
Weather queries not working:

- Ensure the MCP server is running: `uv run python mcp_server/weather_server.py --transport http`
- Check that `MCP_SERVER` in `.env` matches the server URL (default: `http://127.0.0.1:8001/mcp`)
- Verify port 8001 is not in use by another process
Connection errors:
- For HTTP transport, ensure the server URL is correct
- For stdio transport, ensure Python is in your PATH
Port already in use:

```bash
streamlit run convai/ui/streamlit_app.py --server.port 8502
```

Import errors:

- Ensure you're in the project root directory
- Activate the virtual environment: `source .venv/bin/activate`
- Reinstall dependencies: `pip install -r requirements.txt`
Async loop errors:

- Ensure a supported Python version is installed (the project requires Python >= 3.13)
- Update Streamlit: `pip install --upgrade streamlit`
If default ports are already in use, change them:
FastAPI (default 8000):

- Update `PORT` in the `.env` file
- Or: `export PORT=8001`

MCP Server (default 8001):

- Update `MCP_SERVER` in `.env`
- Modify the port configuration in `weather_server.py`

Streamlit (default 8501):

- Use the `--server.port` flag: `streamlit run convai/ui/streamlit_app.py --server.port 8502`
This project is provided as-is for demonstration purposes.
Mukesh Arambakam (amukesh.mk@gmail.com)