
Quorum - Interactive Multi-Agent Conversation Platform

Have intelligent conversations with multiple AI agents that learn, remember, and collaborate

Quorum is an open-source platform for natural conversations with multiple AI agents. Unlike tools that require technical expertise or overwhelm users with complexity, Quorum provides a simple, high-quality conversation experience in which agents maintain context, avoid contradictions, and get smarter over time.

🌟 What Makes Quorum Different

  • Conversation Quality: Anti-contradiction detection, loop prevention, and real-time health scoring ensure productive discussions
  • Intelligent Memory: Agents remember past conversations and learn your preferences, getting better with every interaction
  • Zero-Config Start: No setup required; just start typing and intelligent agents join the conversation
  • For Everyone: Designed for non-technical users while offering advanced customization for power users

🚀 Quick Start

Prerequisites

  • Docker and Docker Compose (for backend services)
  • Node.js 20+ (for frontend development)
  • API keys for at least one LLM provider (Anthropic, OpenAI, Google, or Mistral)

Installation

  1. Clone the repository

git clone https://github.com/krjordan/quorum.git
cd quorum

  2. Configure environment variables

cp .env.example .env
# Edit .env and add your API keys

  3. Start the development environment

Option A: Quick Start Script (Recommended)

./scripts/dev.sh

This script will:

  • Start backend services in Docker (FastAPI + Redis)
  • Wait for backend to be healthy
  • Install frontend dependencies if needed
  • Start the frontend dev server locally

Option B: Manual Start

# Terminal 1: Start backend services
docker-compose -f docker/development/docker-compose.yml up

# Terminal 2: Start frontend
cd frontend
npm install
npm run dev

  4. Access the application

Frontend: http://localhost:3000
Backend API: http://localhost:8000

  5. Stop services
# Stop backend services
./scripts/stop.sh

# Stop frontend: Press Ctrl+C in the frontend terminal

📋 Current Status: Phase 3 In Progress

Phase 1: Single-LLM Streaming Chat Interface ✅ COMPLETE

  • ✅ Next.js 15 frontend with TypeScript strict mode
  • ✅ FastAPI backend with LiteLLM integration
  • ✅ Basic SSE streaming for single LLM
  • ✅ Zustand state management
  • ✅ Tailwind CSS + shadcn/ui components
  • ✅ Docker Compose deployment

Phase 2: Sequential Multi-LLM Debate Engine ✅ COMPLETE

  • ✅ Sequential turn-based debate system (2-4 agents)
  • ✅ XState v5 state machine for robust state management
  • ✅ Custom system prompts per agent
  • ✅ Manual debate control (pause/resume/stop)
  • ✅ Real-time cost tracking
  • ✅ Formatted markdown summaries
  • ✅ Comprehensive backend tests (34/34 passing)
  • ✅ Python 3.9+ compatibility

Phase 3: Interactive Conversation Platform (MVP) 🚧 IN PROGRESS

  • ✅ Conversation Quality Management (anti-contradiction, loop detection, health scoring) - Backend complete
  • ✅ Agent rotation system - Fixed and fully operational
  • 🚧 Intelligent Memory Architecture (three-tier memory, context retrieval, personalization)
  • 🚧 Non-Technical UX (zero-config start, templates, agent personalities)
  • 🚧 Frontend quality indicators integration

πŸ—οΈ Architecture

Development Environment
├── Frontend (localhost:3000) - Runs locally
│   ├── Next.js 15 + React 19
│   ├── Zustand (state management)
│   ├── npm run dev (hot-reload)
│   └── shadcn/ui (components)
│
├── Backend (localhost:8000) - Docker Container
│   ├── FastAPI + LiteLLM
│   ├── SSE streaming
│   └── Multi-provider support
│
└── Redis (localhost:6379) - Docker Container
    └── Rate limiting & caching

📚 Documentation

Product & Strategy

Technical Documentation

🎯 Phase 2 Features: Sequential Multi-LLM Debates

Quick Start (V2 Debates)

Access the V2 debate interface at http://localhost:3000/debate-v2

Key Features:

  • Configure 2-4 AI agents with custom system prompts
  • Select from multiple LLM providers (OpenAI, Anthropic, Google, Mistral)
  • Choose 1-5 rounds of sequential debate
  • Manual control: pause, resume, or stop at any time
  • Real-time cost and token tracking
  • Formatted markdown summary export

API Endpoints (V2)

Base URL: http://localhost:8000/api/v1/debates/v2

1. Create Debate

POST /api/v1/debates/v2
Content-Type: application/json

{
  "topic": "Should AI development be open source?",
  "participants": [
    {
      "name": "Agent 1",
      "model": "gpt-4o",
      "system_prompt": "You are an advocate for open source AI.",
      "temperature": 0.7
    },
    {
      "name": "Agent 2",
      "model": "claude-3-5-sonnet-20241022",
      "system_prompt": "You are a proponent of controlled AI development.",
      "temperature": 0.7
    }
  ],
  "max_rounds": 2,
  "context_window_rounds": 10,
  "cost_warning_threshold": 1.0
}

Response:

{
  "id": "debate_v2_abc123",
  "status": "initialized",
  "current_round": 1,
  "current_turn": 0,
  "config": { ... },
  "total_cost": 0.0
}
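
As a quick smoke test, something like the following TypeScript sketch (Node 18+ or the browser, where fetch is built in) creates a debate against the local backend; the payload simply mirrors the request above, and the variable names are illustrative:

// Illustrative sketch - create a debate and print its id/status
const res = await fetch("http://localhost:8000/api/v1/debates/v2", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    topic: "Should AI development be open source?",
    participants: [
      {
        name: "Agent 1",
        model: "gpt-4o",
        system_prompt: "You are an advocate for open source AI.",
        temperature: 0.7,
      },
      {
        name: "Agent 2",
        model: "claude-3-5-sonnet-20241022",
        system_prompt: "You are a proponent of controlled AI development.",
        temperature: 0.7,
      },
    ],
    max_rounds: 2,
  }),
});
const debate = await res.json();
console.log(debate.id, debate.status); // e.g. "debate_v2_abc123" "initialized"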

2. Get Next Turn (SSE Stream)

GET /api/v1/debates/v2/{debate_id}/next-turn
Accept: text/event-stream

Stream Events:

event: participant_start
data: {"participant_name": "Agent 1", "model": "gpt-4o"}

event: chunk
data: {"text": "I believe that open source..."}

event: participant_complete
data: {"participant_name": "Agent 1", "tokens_used": 150, "cost": 0.002}

event: round_complete
data: {"round": 1, "total_cost": 0.005}

event: debate_complete
data: {"reason": "All rounds completed"}
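
Since this endpoint is a plain GET, the browser's native EventSource API can consume it. A rough sketch (the debateId is assumed to come from the create response above): it accumulates chunk events into the current reply and closes the connection once the debate completes:

// Illustrative sketch - browser-side consumption of the turn stream
const debateId = "debate_v2_abc123"; // from the create response
const es = new EventSource(`http://localhost:8000/api/v1/debates/v2/${debateId}/next-turn`);

let reply = "";
es.addEventListener("participant_start", (e) => {
  const { participant_name, model } = JSON.parse((e as MessageEvent).data);
  console.log(`${participant_name} (${model}) is responding...`);
  reply = "";
});
es.addEventListener("chunk", (e) => {
  reply += JSON.parse((e as MessageEvent).data).text; // streamed text fragment
});
es.addEventListener("participant_complete", (e) => {
  const { participant_name, cost } = JSON.parse((e as MessageEvent).data);
  console.log(`${participant_name} finished ($${cost}):`, reply);
});
es.addEventListener("debate_complete", (e) => {
  console.log("Debate finished:", JSON.parse((e as MessageEvent).data).reason);
  es.close(); // stop listening once all rounds are done
});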

3. Get Debate Status

GET /api/v1/debates/v2/{debate_id}

4. Stop Debate

POST /api/v1/debates/v2/{debate_id}/stop

5. Pause Debate

POST /api/v1/debates/v2/{debate_id}/pause

6. Resume Debate

POST /api/v1/debates/v2/{debate_id}/resume
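
Stop, pause, and resume are all bodyless POSTs that differ only by path suffix, so a thin wrapper covers all three. This is an illustrative sketch, not code from the repo:

// Illustrative helper for the three control endpoints
async function controlDebate(debateId: string, action: "stop" | "pause" | "resume"): Promise<void> {
  const res = await fetch(`http://localhost:8000/api/v1/debates/v2/${debateId}/${action}`, {
    method: "POST",
  });
  if (!res.ok) throw new Error(`${action} failed with HTTP ${res.status}`);
}

await controlDebate("debate_v2_abc123", "pause");  // take a break mid-debate
await controlDebate("debate_v2_abc123", "resume"); // pick up where it left off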

7. Get Summary

GET /api/v1/debates/v2/{debate_id}/summary

Response:

{
  "debate_id": "debate_v2_abc123",
  "topic": "Should AI development be open source?",
  "status": "completed",
  "rounds_completed": 2,
  "total_rounds": 2,
  "participants": ["Agent 1", "Agent 2"],
  "participant_stats": [
    {
      "name": "Agent 1",
      "model": "gpt-4o",
      "total_tokens": 500,
      "total_cost": 0.015,
      "average_response_time_ms": 1200.0,
      "response_count": 2
    }
  ],
  "total_cost": 0.030,
  "markdown_transcript": "# Debate Transcript\n\n..."
}
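
Because the summary already carries a ready-made markdown transcript, exporting it takes one write after the fetch. A small Node 18+ sketch (the debate id is assumed):

import { writeFile } from "node:fs/promises";

// Illustrative sketch - fetch the summary and save the transcript to disk
const debateId = "debate_v2_abc123";
const res = await fetch(`http://localhost:8000/api/v1/debates/v2/${debateId}/summary`);
const summary = await res.json();

console.log(`${summary.topic}: ${summary.rounds_completed}/${summary.total_rounds} rounds, $${summary.total_cost.toFixed(3)} total`);
await writeFile(`${debateId}-transcript.md`, summary.markdown_transcript);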

State Machine (XState)

The V2 debate UI uses XState v5 for robust state management (a rough sketch of an equivalent machine follows the state list below):

CONFIGURING → READY → RUNNING → COMPLETED
                ↓         ↓
              ERROR     PAUSED
                          ↓
                       RUNNING

States:

  • CONFIGURING: User configures participants and rounds
  • READY: Configuration validated, ready to start
  • RUNNING: Debate in progress, streaming responses
  • PAUSED: Debate paused, can resume
  • COMPLETED: All rounds complete or manually stopped
  • ERROR: Error occurred, can retry
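
The repository's actual machine isn't reproduced here, but the documented states and transitions map onto an XState v5 definition roughly like this (the event names are assumptions for illustration):

import { createMachine } from "xstate";

// Illustrative sketch of the documented states; event names are assumed
const debateMachine = createMachine({
  id: "debate",
  initial: "configuring",
  states: {
    configuring: { on: { VALIDATE: "ready" } }, // user sets up participants/rounds
    ready: { on: { START: "running", ERROR: "error" } },
    running: { on: { PAUSE: "paused", STOP: "completed", COMPLETE: "completed", ERROR: "error" } },
    paused: { on: { RESUME: "running" } },      // resuming re-enters RUNNING
    completed: { type: "final" },
    error: { on: { RETRY: "configuring" } },    // error state allows retry
  },
});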

Backend Testing

Run the comprehensive test suite:

cd backend
python3 -m pytest tests/ -v

# Results: 34 tests passing
# - 13 API route tests
# - 13 Sequential debate service tests
# - 8 Summary service tests

🛠️ Development

Development Workflow

Recommended Setup:

  • Frontend runs locally with npm run dev for instant hot-reload
  • Backend services run in Docker for consistency

Quick Start:

./scripts/dev.sh

Frontend Development

The frontend runs locally for the best development experience:

cd frontend
npm install
npm run dev       # Start dev server
npm run test      # Run tests
npm run lint      # Lint code

Backend Development

Backend runs in Docker with hot-reload enabled:

# Start backend
docker-compose -f docker/development/docker-compose.yml up

# View logs
docker-compose -f docker/development/docker-compose.yml logs -f backend

# Run backend tests (inside container)
docker exec quorum-backend-dev pytest --cov=app

# Or run locally (without Docker)
cd backend
pip install -r requirements.txt
pip install -r requirements-dev.txt
uvicorn app.main:app --reload

Running Tests

# Frontend tests (local)
cd frontend
npm run test:coverage

# Backend tests (Docker)
docker exec quorum-backend-dev pytest --cov=app

# Backend tests (local)
cd backend
pytest --cov=app

Environment Variables

  • Root .env: Backend configuration (API keys, CORS, etc.); an illustrative example follows below
  • Frontend .env.local: Frontend-specific vars (auto-created by dev script)
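
The authoritative variable names live in .env.example; as an illustration, LiteLLM conventionally reads provider keys named like the following (only the providers you use are required, and the values here are placeholders):

# Root .env - illustrative; copy .env.example for the real list
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=...
MISTRAL_API_KEY=...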

Useful Commands

# Start everything
./scripts/dev.sh

# Stop backend services
./scripts/stop.sh

# Rebuild backend container
docker-compose -f docker/development/docker-compose.yml up --build

# View backend logs
docker-compose -f docker/development/docker-compose.yml logs -f

# Reset everything
docker-compose -f docker/development/docker-compose.yml down -v

🗺️ Roadmap

See ROADMAP.md for the complete strategic roadmap.

Completed:

  • ✅ Phase 1: Single-LLM Chat Interface
  • ✅ Phase 2: Sequential Multi-LLM Debate System

In Progress:

  • 🚧 Phase 3: Interactive Conversation Platform (MVP) (4-6 weeks)
    • Conversation Quality Management
    • Intelligent Memory Architecture
    • Non-Technical User Experience

Next:

  • 📅 Phase 4: Enhanced Experience (Real-time streaming, consensus tools, collaboration)
  • 📅 Phase 5: Advanced Features (Voice, domain-specific agents, enterprise)

πŸ“ License

MIT License - see LICENSE for details

🤝 Contributing

Contributions are welcome! Please see CONTRIBUTING.md for guidelines.

📞 Support

Built with ❤️ using Next.js, FastAPI, and LiteLLM
