Have intelligent conversations with multiple AI agents that learn, remember, and collaborate
Quorum is an open-source platform that lets you take part in natural conversations with multiple AI agents. Unlike tools that require technical expertise or overwhelm users with complexity, Quorum provides a simple, high-quality conversation experience in which agents maintain context, avoid contradictions, and get smarter over time.
- Conversation Quality: Anti-contradiction detection, loop prevention, and real-time health scoring ensure productive discussions
- Intelligent Memory: Agents remember past conversations and learn your preferences, getting better with every interaction
- Zero-Config Start: No setup required: just start typing and intelligent agents join the conversation
- For Everyone: Designed for non-technical users while offering advanced customization for power users
- Docker and Docker Compose (for backend services)
- Node.js 20+ (for frontend development)
- API keys for at least one LLM provider (Anthropic, OpenAI, Google, or Mistral)
- Clone the repository
git clone https://github.com/yourusername/quorum.git
cd quorum
- Configure environment variables
cp .env.example .env
# Edit .env and add your API keys
- Start the development environment
Option A: Quick Start Script (Recommended)
./scripts/dev.sh
This script will:
- Start backend services in Docker (FastAPI + Redis)
- Wait for backend to be healthy
- Install frontend dependencies if needed
- Start the frontend dev server locally
Option B: Manual Start
# Terminal 1: Start backend services
docker-compose -f docker/development/docker-compose.yml up
# Terminal 2: Start frontend
cd frontend
npm install
npm run dev
- Access the application
- Frontend: http://localhost:3000
- Backend API: http://localhost:8000
- API Documentation: http://localhost:8000/docs
- Stop services
# Stop backend services
./scripts/stop.sh
# Stop frontend: Press Ctrl+C in the frontend terminal
Phase 1: Single-LLM Streaming Chat Interface ✅ COMPLETE
- ✅ Next.js 15 frontend with TypeScript strict mode
- ✅ FastAPI backend with LiteLLM integration
- ✅ Basic SSE streaming for single LLM
- ✅ Zustand state management
- ✅ Tailwind CSS + shadcn/ui components
- ✅ Docker Compose deployment
Phase 2: Sequential Multi-LLM Debate Engine ✅ COMPLETE
- ✅ Sequential turn-based debate system (2-4 agents)
- ✅ XState v5 state machine for robust state management
- ✅ Custom system prompts per agent
- ✅ Manual debate control (pause/resume/stop)
- ✅ Real-time cost tracking
- ✅ Formatted markdown summaries
- ✅ Comprehensive backend tests (34/34 passing)
- ✅ Python 3.9+ compatibility
Phase 3: Interactive Conversation Platform (MVP) 🚧 IN PROGRESS
- ✅ Conversation Quality Management (anti-contradiction, loop detection, health scoring) - Backend complete
- ✅ Agent rotation system - Fixed and fully operational
- 🚧 Intelligent Memory Architecture (three-tier memory, context retrieval, personalization)
- 🚧 Non-Technical UX (zero-config start, templates, agent personalities)
- 🚧 Frontend quality indicators integration
Development Environment
├── Frontend (localhost:3000) - Runs locally
│   ├── Next.js 15 + React 19
│   ├── Zustand (state management)
│   ├── npm run dev (hot-reload)
│   └── shadcn/ui (components)
│
├── Backend (localhost:8000) - Docker Container
│   ├── FastAPI + LiteLLM
│   ├── SSE streaming
│   └── Multi-provider support
│
└── Redis (localhost:6379) - Docker Container
    └── Rate limiting & caching
- Product Roadmap - Strategic direction and development phases
- MVP Scope - Detailed Phase 3 feature specifications
- Competitive Research - Market analysis and differentiation
- LLM Conversation Apps Research
Access the V2 debate interface at http://localhost:3000/debate-v2
Key Features:
- Configure 2-4 AI agents with custom system prompts
- Select from multiple LLM providers (OpenAI, Anthropic, Google, Mistral)
- Choose 1-5 rounds of sequential debate
- Manual control: pause, resume, or stop at any time
- Real-time cost and token tracking
- Formatted markdown summary export
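As a quick illustration, a debate can also be configured and created programmatically against the local API. This is a minimal sketch: `build_debate_payload` and `create_debate` are hypothetical helper names (not part of the codebase), and the request shape follows the API reference in this README; it assumes the backend from the Quick Start is running on localhost:8000.

```python
import json
from urllib import request

API_URL = "http://localhost:8000/api/v1/debates/v2"  # Base URL from the API docs

def build_debate_payload(topic: str, participants: list[dict],
                         max_rounds: int = 2) -> dict:
    """Build a request body matching the documented debate schema."""
    return {
        "topic": topic,
        "participants": participants,
        "max_rounds": max_rounds,
        "context_window_rounds": 10,
        "cost_warning_threshold": 1.0,
    }

def create_debate(payload: dict) -> dict:
    """POST the payload to the local backend and return the parsed JSON response."""
    req = request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# Build the same two-agent configuration shown in the API example below.
payload = build_debate_payload(
    "Should AI development be open source?",
    [
        {"name": "Agent 1", "model": "gpt-4o",
         "system_prompt": "You are an advocate for open source AI.",
         "temperature": 0.7},
        {"name": "Agent 2", "model": "claude-3-5-sonnet-20241022",
         "system_prompt": "You are a proponent of controlled AI development.",
         "temperature": 0.7},
    ],
)
print(json.dumps(payload, indent=2))
# With the backend running: debate = create_debate(payload)
```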
Base URL: http://localhost:8000/api/v1/debates/v2
POST /api/v1/debates/v2
Content-Type: application/json
{
"topic": "Should AI development be open source?",
"participants": [
{
"name": "Agent 1",
"model": "gpt-4o",
"system_prompt": "You are an advocate for open source AI.",
"temperature": 0.7
},
{
"name": "Agent 2",
"model": "claude-3-5-sonnet-20241022",
"system_prompt": "You are a proponent of controlled AI development.",
"temperature": 0.7
}
],
"max_rounds": 2,
"context_window_rounds": 10,
"cost_warning_threshold": 1.0
}
Response:
{
"id": "debate_v2_abc123",
"status": "initialized",
"current_round": 1,
"current_turn": 0,
"config": { ... },
"total_cost": 0.0
}
GET /api/v1/debates/v2/{debate_id}/next-turn
Accept: text/event-stream
Stream Events:
event: participant_start
data: {"participant_name": "Agent 1", "model": "gpt-4o"}
event: chunk
data: {"text": "I believe that open source..."}
event: participant_complete
data: {"participant_name": "Agent 1", "tokens_used": 150, "cost": 0.002}
event: round_complete
data: {"round": 1, "total_cost": 0.005}
event: debate_complete
data: {"reason": "All rounds completed"}
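A client consuming this stream just needs to pair each `event:` line with its `data:` JSON. The sketch below parses the event format shown above; it assumes the simple one-`event:`-line, one-`data:`-line framing from the examples, not the full SSE spec (multi-line `data:`, `id:`, retry fields).

```python
import json

def parse_sse(stream_text: str) -> list[tuple[str, dict]]:
    """Pair each `event:` line with the JSON in its following `data:` line."""
    events = []
    current_event = None
    for line in stream_text.splitlines():
        line = line.strip()
        if line.startswith("event:"):
            current_event = line[len("event:"):].strip()
        elif line.startswith("data:") and current_event is not None:
            data = json.loads(line[len("data:"):].strip())
            events.append((current_event, data))
            current_event = None
    return events

# Sample stream matching the documented events above.
sample = """\
event: participant_start
data: {"participant_name": "Agent 1", "model": "gpt-4o"}

event: chunk
data: {"text": "I believe that open source..."}

event: participant_complete
data: {"participant_name": "Agent 1", "tokens_used": 150, "cost": 0.002}
"""
for name, data in parse_sse(sample):
    print(name, data)
```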
GET /api/v1/debates/v2/{debate_id}
POST /api/v1/debates/v2/{debate_id}/stop
POST /api/v1/debates/v2/{debate_id}/pause
POST /api/v1/debates/v2/{debate_id}/resume
GET /api/v1/debates/v2/{debate_id}/summary
Response:
{
"debate_id": "debate_v2_abc123",
"topic": "Should AI development be open source?",
"status": "completed",
"rounds_completed": 2,
"total_rounds": 2,
"participants": ["Agent 1", "Agent 2"],
"participant_stats": [
{
"name": "Agent 1",
"model": "gpt-4o",
"total_tokens": 500,
"total_cost": 0.015,
"average_response_time_ms": 1200.0,
"response_count": 2
}
],
"total_cost": 0.030,
"markdown_transcript": "# Debate Transcript\n\n..."
}
The V2 debate UI uses XState v5 for robust state management:
CONFIGURING → READY → RUNNING → COMPLETED
                ↓        ↓
              ERROR    PAUSED
                         ↓
                      RUNNING
States:
- CONFIGURING: User configures participants and rounds
- READY: Configuration validated, ready to start
- RUNNING: Debate in progress, streaming responses
- PAUSED: Debate paused, can resume
- COMPLETED: All rounds complete or manually stopped
- ERROR: Error occurred, can retry
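The states and transitions above can be sketched as a plain transition table. This is an illustration only, not the actual XState v5 machine in the frontend, and the event names (`START`, `PAUSE`, etc.) are assumed for the example:

```python
# Transition table mirroring the state diagram above.
# Event names are illustrative; the real machine lives in the frontend.
TRANSITIONS = {
    "CONFIGURING": {"CONFIGURE_VALID": "READY"},
    "READY": {"START": "RUNNING", "FAIL": "ERROR"},
    "RUNNING": {"PAUSE": "PAUSED", "STOP": "COMPLETED", "FINISH": "COMPLETED"},
    "PAUSED": {"RESUME": "RUNNING", "STOP": "COMPLETED"},
    "ERROR": {"RETRY": "READY"},
    "COMPLETED": {},
}

def next_state(state: str, event: str) -> str:
    """Return the next state; unhandled events leave the state unchanged."""
    return TRANSITIONS[state].get(event, state)

# Walk one happy path: configure, start, pause, resume, finish.
state = "CONFIGURING"
for event in ["CONFIGURE_VALID", "START", "PAUSE", "RESUME", "FINISH"]:
    state = next_state(state, event)
print(state)  # COMPLETED
```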
Run the comprehensive test suite:
cd backend
python3 -m pytest tests/ -v
# Results: 34 tests passing
# - 13 API route tests
# - 13 Sequential debate service tests
# - 8 Summary service tests
Recommended Setup:
- Frontend runs locally with npm run dev for instant hot-reload
- Backend services run in Docker for consistency
Quick Start:
./scripts/dev.sh
The frontend runs locally for the best development experience:
cd frontend
npm install
npm run dev # Start dev server
npm run test # Run tests
npm run lint # Lint code
Backend runs in Docker with hot-reload enabled:
# Start backend
docker-compose -f docker/development/docker-compose.yml up
# View logs
docker-compose -f docker/development/docker-compose.yml logs -f backend
# Run backend tests (inside container)
docker exec quorum-backend-dev pytest --cov=app
# Or run locally (without Docker)
cd backend
pip install -r requirements.txt
pip install -r requirements-dev.txt
uvicorn app.main:app --reload
# Frontend tests (local)
cd frontend
npm run test:coverage
# Backend tests (Docker)
docker exec quorum-backend-dev pytest --cov=app
# Backend tests (local)
cd backend
pytest --cov=app
- Root .env: Backend configuration (API keys, CORS, etc.)
- Frontend .env.local: Frontend-specific vars (auto-created by dev script)
# Start everything
./scripts/dev.sh
# Stop backend services
./scripts/stop.sh
# Rebuild backend container
docker-compose -f docker/development/docker-compose.yml up --build
# View backend logs
docker-compose -f docker/development/docker-compose.yml logs -f
# Reset everything
docker-compose -f docker/development/docker-compose.yml down -v
See ROADMAP.md for the complete strategic roadmap.
- ✅ Phase 1: Single-LLM Chat Interface
- ✅ Phase 2: Sequential Multi-LLM Debate System
- 🚧 Phase 3: Interactive Conversation Platform (MVP) (4-6 weeks)
- Conversation Quality Management
- Intelligent Memory Architecture
- Non-Technical User Experience
- 📋 Phase 4: Enhanced Experience (Real-time streaming, consensus tools, collaboration)
- 📋 Phase 5: Advanced Features (Voice, domain-specific agents, enterprise)
MIT License - see LICENSE for details
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
- GitHub Issues: Report bugs or request features
- Documentation: Full docs
Built with ❤️ using Next.js, FastAPI, and LiteLLM