A comprehensive AI-powered chat system built with modern microservices architecture, featuring LangGraph, FastAPI, and a beautiful Vue.js frontend. The system provides enterprise-grade chat capabilities with advanced AI integration, user management, and administrative controls.
Our chat system features a modern, responsive web interface built with Vue.js and Tailwind CSS:

- Secure authentication system with JWT tokens and role-based access control
- Comprehensive dashboard with system statistics, user activity, and quick access to all features
- Intuitive chat interface with real-time messaging, conversation history, and AI-powered responses
- Powerful administrative interface for user management, system monitoring, and configuration
- Try the live demo: [Coming Soon]
- API documentation: available at `/docs` when running locally
- Multiple LLM Providers: Support for OpenAI, DeepSeek, and other AI providers
- Enhanced Graph Architecture: Advanced conversation flow with LangGraph
- REST API Tools: The AI can make external API calls during conversations
- Conversation History: Persistent chat sessions with context management
- Real-time Messaging: WebSocket support for instant communication
- Conversation Management: Create, organize, and manage multiple chat sessions

- JWT Authentication: Secure token-based authentication system
- Role-Based Access Control: User and admin roles with different permissions
- User Registration: Self-service user registration with email validation
- Password Management: Secure password change and reset functionality
- Profile Management: User profiles with customizable information

- System Dashboard: Comprehensive system statistics and monitoring
- User Management: Admin panel for managing users and permissions
- Analytics: User activity tracking and system performance metrics
- System Configuration: Configurable settings and system parameters
- Data Export: Export user data and chat histories

- Service Separation: Dedicated services for auth, chat, admin, and frontend
- Docker Containerization: Full Docker support with docker-compose
- API Gateway: Nginx-based API gateway with load balancing
- Service Communication: RESTful APIs between services
- Scalable Design: Horizontal scaling support

- PostgreSQL: Robust relational database for data persistence
- Redis: Caching and session management
- Database Migrations: Automated database schema management
- Data Backup: Automated backup and recovery procedures

- Comprehensive Logging: Structured logging with rotation policies
- Monitoring: Prometheus and Grafana integration
- Testing: Comprehensive test suite with pytest
- CI/CD Ready: GitHub Actions and deployment automation
- Documentation: Extensive documentation and API references

- Responsive Design: Mobile-first responsive interface
- Dark/Light Mode: Theme switching support
- Fast Performance: Optimized Vue.js application with Vite
- Intuitive Navigation: User-friendly interface design
- Mobile Support: Full mobile device compatibility
- Real-time Chat: Live chat with typing indicators
- Notifications: Toast notifications for user feedback
- Data Tables: Advanced tables with sorting and filtering
- Charts & Graphs: Visual data representation
- Search & Filter: Advanced search capabilities

- RESTful APIs: Well-documented REST endpoints
- WebSocket Support: Real-time bidirectional communication
- Webhook Support: External system integration
- OpenAPI/Swagger: Auto-generated API documentation
- CORS Support: Cross-origin resource sharing
- Security Headers: Comprehensive security configurations
- Rate Limiting: API rate limiting and abuse prevention
- Async Processing: Fully asynchronous implementation
- Response Compression: Gzip compression for optimal performance
- Data Encryption: Secure data handling and storage
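The JWT-based security features above revolve around signed tokens. As a rough sketch of what issuing and verifying such a token involves, here is a minimal HS256 implementation using only the standard library. This is illustrative only: the actual auth service presumably uses a dedicated JWT library, and the secret and claim names here are assumptions, not the project's real values.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"change-me"  # illustrative only; a real deployment loads this from config

def _b64(data: bytes) -> str:
    """URL-safe base64 without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def _b64d(segment: str) -> bytes:
    """Decode a JWT segment, restoring the stripped padding."""
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def create_token(user_id: str, role: str, ttl: int = 3600) -> str:
    """Build a signed HS256 JWT carrying the user's id and role."""
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64(json.dumps(
        {"sub": user_id, "role": role, "exp": int(time.time()) + ttl}
    ).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_token(token: str) -> dict:
    """Check the signature and expiry; return the claims or raise ValueError."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = _b64(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    claims = json.loads(_b64d(payload))
    if claims["exp"] < time.time():
        raise ValueError("token expired")
    return claims
```

Role-based access control then reduces to checking the `role` claim of a verified token before serving admin endpoints.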
```
.
├── app/                              # Main application package
│   ├── __init__.py                   # Package initialization
│   ├── main.py                       # FastAPI application entry point
│   ├── config.py                     # Configuration management
│   ├── api/                          # API endpoints and models
│   │   ├── __init__.py
│   │   ├── models.py                 # Pydantic models for requests/responses
│   │   └── routes.py                 # API route definitions
│   ├── database/                     # Database models and operations
│   │   ├── __init__.py
│   │   ├── models.py                 # SQLAlchemy database models
│   │   └── session.py                # Database session management
│   ├── graph/                        # LangGraph implementation
│   │   ├── __init__.py
│   │   ├── builder.py                # Graph construction (basic, advanced, enhanced)
│   │   └── nodes.py                  # Graph node implementations
│   ├── services/                     # Core services
│   │   ├── __init__.py
│   │   ├── api_tools.py              # API tools service for external calls
│   │   ├── history.py                # Conversation history service
│   │   ├── llm.py                    # LLM service
│   │   └── webhook.py                # Webhook service
│   └── utils/                        # Utility modules
│       ├── __init__.py
│       ├── logging.py                # Logging configuration
│       ├── monitoring.py             # Monitoring and LangSmith integration
│       └── tracking.py               # Request tracking utilities
├── docs/                             # Documentation
│   ├── api/                          # API documentation
│   │   └── chat-api.md               # Chat API endpoints and models
│   ├── ai/                           # AI/LangGraph documentation
│   │   ├── langgraph-architecture.md # Graph architecture guide
│   │   ├── langsmith-integration.md  # LangSmith integration guide
│   │   └── deepseek-integration.md   # DeepSeek integration guide
│   ├── backend/                      # Backend documentation
│   │   ├── architecture.md           # System architecture
│   │   └── services.md               # Services documentation
│   └── database/                     # Database documentation
│       └── schema.md                 # Database schema and models
├── tests/                            # Test files
│   ├── test_enhanced_graph.py        # Enhanced graph tests
│   ├── test_deepseek_integration.py  # DeepSeek integration tests
│   └── ...                           # Other test modules organized by feature
├── requirements.txt                  # Consolidated project dependencies
└── requirements-test.txt             # Testing dependencies
```
- Python 3.9+
- OpenAI API key OR DeepSeek API key (or other LLM provider)
1. Clone the repository:

   ```bash
   git clone https://github.com/artaasd95/chat-bot-practice-langchain.git
   cd chat-bot-practice-langchain
   ```

2. Create a virtual environment:

   ```bash
   python -m venv venv
   # On Windows
   .\venv\Scripts\activate
   # On Unix/macOS
   source venv/bin/activate
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Create a `.env` file in the project root with your configuration:

   ```env
   # API Settings
   API_HOST=0.0.0.0
   API_PORT=8000
   API_WORKERS=4

   # CORS Settings
   CORS_ALLOW_ORIGINS=http://localhost:3000,http://localhost:8080

   # LLM Settings - choose your provider ("openai" or "deepseek")
   LLM_PROVIDER=openai
   # "gpt-3.5-turbo" for OpenAI, "deepseek-chat" for DeepSeek
   LLM_MODEL=gpt-3.5-turbo
   LLM_TEMPERATURE=0.7
   LLM_MAX_TOKENS=1000

   # OpenAI Configuration
   OPENAI_API_KEY=your_openai_api_key

   # DeepSeek Configuration (alternative to OpenAI)
   DEEPSEEK_API_KEY=your_deepseek_api_key

   # Logging Settings
   LOG_LEVEL=INFO
   LOG_RETENTION=7 days
   LOG_ROTATION=100 MB

   # Webhook Settings
   WEBHOOK_MAX_RETRIES=3
   WEBHOOK_RETRY_DELAY=2
   ```
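These variables are consumed by `app/config.py`. Its actual implementation isn't reproduced here, but a minimal stand-in using only the standard library might look like the following sketch; the `Settings` dataclass and `load_settings` function are hypothetical names, and only a few of the keys from the `.env` sample are shown:

```python
import os
from dataclasses import dataclass

@dataclass
class Settings:
    api_host: str
    api_port: int
    llm_provider: str
    llm_model: str
    llm_temperature: float

def load_settings() -> Settings:
    """Read configuration from environment variables, with sane defaults."""
    return Settings(
        api_host=os.getenv("API_HOST", "0.0.0.0"),
        api_port=int(os.getenv("API_PORT", "8000")),
        llm_provider=os.getenv("LLM_PROVIDER", "openai"),
        llm_model=os.getenv("LLM_MODEL", "gpt-3.5-turbo"),
        llm_temperature=float(os.getenv("LLM_TEMPERATURE", "0.7")),
    )
```

In practice a dotenv loader or pydantic settings class would populate the environment first; the point is simply that every `.env` key maps to one typed configuration field.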
Run the development server:

```bash
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
```

Or serve in production with Gunicorn and Uvicorn workers:

```bash
gunicorn app.main:app -k uvicorn.workers.UvicornWorker -w 4 --bind 0.0.0.0:8000
```

Verify that the service is up:

```bash
curl http://localhost:8000/health
```
The enhanced chat endpoint provides full conversation history and API tool calling capabilities:
```bash
curl -X POST http://localhost:8000/api/chat \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Hello, how are you?"}],
    "conversation_id": "optional-session-id"
  }'
```
The response includes enhanced metadata:

```json
{
  "response": "Hello! I'm doing well, thank you for asking.",
  "request_id": "req_123456",
  "conversation_id": "conv_789",
  "metadata": {
    "api_calls_made": 0,
    "history_loaded": true,
    "messages_in_context": 5,
    "api_call_details": null
  }
}
```
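The same request can be issued from Python. The stdlib-only sketch below assumes the request and response shapes shown above; the helper names (`build_payload`, `send_chat`) are ours, not part of the project:

```python
import json
import urllib.request
from typing import Optional

API_URL = "http://localhost:8000/api/chat"  # adjust for your deployment

def build_payload(content: str, conversation_id: Optional[str] = None) -> dict:
    """Assemble the request body expected by the enhanced chat endpoint."""
    payload = {"messages": [{"role": "user", "content": content}]}
    if conversation_id:
        payload["conversation_id"] = conversation_id
    return payload

def send_chat(content: str, conversation_id: Optional[str] = None) -> dict:
    """POST a message and return the parsed JSON response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(content, conversation_id)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Omitting `conversation_id` starts a fresh session; passing one back from a previous response continues it.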
To call the direct chat endpoint:

```bash
curl -X POST http://localhost:8000/api/chat/direct \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello, how are you?"}]}'
```
To submit an asynchronous request with a webhook callback:

```bash
curl -X POST http://localhost:8000/api/chat/webhook \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Tell me about LangGraph"}],
    "callback_url": "https://your-callback-url.com/webhook"
  }'
```
Check the status of a webhook request:

```bash
curl http://localhost:8000/api/chat/webhook/status/{track_id}
```
The enhanced graph can automatically detect when the LLM wants to make external API calls and execute them:
```python
# Example conversation that triggers an API call
user_message = "Get the current weather in New York"

# LLM response: "API_CALL: GET https://api.weather.com/v1/current?location=New+York"
# The system automatically makes the API call and incorporates the response
```
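In outline, the detection step just parses the method and URL out of the LLM's `API_CALL:` marker before a tool node executes the request. A simplified parser is sketched below; the exact marker format and parsing logic in `app/services/api_tools.py` may differ:

```python
import re
from typing import Optional, Tuple

# Matches markers like "API_CALL: GET https://example.com/path"
API_CALL_PATTERN = re.compile(r"API_CALL:\s*(GET|POST|PUT|DELETE)\s+(\S+)")

def extract_api_call(llm_output: str) -> Optional[Tuple[str, str]]:
    """Return (method, url) if the LLM requested an external call, else None."""
    match = API_CALL_PATTERN.search(llm_output)
    if match is None:
        return None
    return match.group(1), match.group(2)
```

When `extract_api_call` returns a pair, the graph routes to the tool node, performs the HTTP request, and feeds the response back into the conversation context.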
Conversations are automatically saved and loaded:
- Automatic Loading: Previous messages are loaded when a `conversation_id` is provided
- Context Management: The system maintains conversation context across sessions
- Persistent Storage: All conversations are stored in the PostgreSQL database
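Conceptually, the history service keys stored messages by `conversation_id` and bounds the context window before each LLM call. A toy in-memory version illustrates the shape of the API; the real service in `app/services/history.py` persists to PostgreSQL, and the class and method names here are illustrative:

```python
from collections import defaultdict
from typing import Dict, List

class ConversationHistory:
    """Minimal in-memory stand-in for the PostgreSQL-backed history service."""

    def __init__(self, max_context: int = 20):
        self.max_context = max_context
        self._store: Dict[str, List[dict]] = defaultdict(list)

    def append(self, conversation_id: str, role: str, content: str) -> None:
        """Record one message in a conversation."""
        self._store[conversation_id].append({"role": role, "content": content})

    def load(self, conversation_id: str) -> List[dict]:
        """Return the most recent messages, bounded by the context window."""
        return self._store[conversation_id][-self.max_context:]
```

Trimming to `max_context` keeps prompts within the model's token budget while preserving the most recent exchanges.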
- Basic Graph: Simple linear flow for standard conversations
- Advanced Graph: Extensible graph with custom nodes
- Conditional Graph: Smart routing based on message content
- Enhanced Graph: Full-featured graph with API tools and history management
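The Conditional Graph's "smart routing" can be pictured as a plain routing function of the kind LangGraph's conditional edges accept: it inspects the state and returns the name of the next node. The node names and keyword rules below are invented for the example and are not the project's actual routing logic:

```python
def route_message(state: dict) -> str:
    """Pick the next node from the latest user message (illustrative rules)."""
    content = state["messages"][-1]["content"].lower()
    if "weather" in content or "http" in content:
        return "api_tools"   # likely needs an external API call
    if len(content) > 500:
        return "summarize"   # long input: condense before generating
    return "generate"        # default conversational path
```

In a real builder this function would be wired in with something like `graph.add_conditional_edges("preprocess", route_message)`, so each branch name maps to a registered node.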
The system is designed to be easily extensible. To add new nodes to the graph:
1. Define new node functions in `app/graph/nodes.py`
2. Update the graph builder in `app/graph/builder.py` to include your new nodes
3. Modify the API routes as needed to support the new functionality
Example of adding a new node:
```python
# In app/graph/nodes.py
async def my_new_node(state: GraphState) -> GraphState:
    updated_state = state  # process the state here
    return updated_state

# In app/graph/builder.py
async def build_custom_graph(llm: BaseLLM) -> StateGraph:
    # ... existing code ...
    graph.add_node("my_new_node", my_new_node)
    graph.add_edge("generate", "my_new_node")
    graph.add_edge("my_new_node", "postprocess")
    # ... rest of the code ...
```
Comprehensive documentation is available in the `docs/` directory:
- API Documentation: Complete API reference with examples
- Architecture Guide: Detailed system architecture and design patterns
- Services Documentation: In-depth service descriptions and implementations
- Database Schema: Database models and relationships
- LangGraph Architecture: Graph design patterns and node implementations
- LangSmith Integration: Comprehensive guide for tracing, monitoring, debugging, and evaluating LLM applications
- DeepSeek Integration: Guide for using DeepSeek models as an alternative LLM provider
Licensed under the MIT License.