# AI Agent Council

A collaborative decision-making platform where multiple AI agents discuss and reach consensus on user queries.

## Overview

The AI Agent Council is a system where:
- Users can submit queries through a web interface
- Multiple AI agents independently analyze the query
- Agents engage in a discussion to share perspectives
- Agents collaboratively reach a consensus
- The final decision is presented back to the user
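The README doesn't prescribe an exact consensus algorithm; as a rough mental model of the flow above (all names and the majority-vote rule are illustrative, not the project's actual implementation), the pipeline can be sketched as:

```python
from collections import Counter

def council_decide(agents, query):
    """Hypothetical sketch of the council flow: each agent answers
    independently, sees the others' views, then the majority answer wins."""
    # Step 1: independent analysis — every agent answers the query alone
    opinions = {name: agent(query) for name, agent in agents.items()}
    # Step 2: discussion round — each agent revises after seeing the
    # others' opinions (agents are plain callables in this sketch)
    revised = {
        name: agent(f"{query}\nOther views: {list(opinions.values())}")
        for name, agent in agents.items()
    }
    # Step 3: consensus by simple majority vote over the revised answers
    winner, _ = Counter(revised.values()).most_common(1)[0]
    return winner
```

In the real system each "agent" would be an LLM-backed process, but the three-phase shape (analyze, discuss, decide) is the same.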
The system consists of:
- A Next.js web application with a modern UI
- Python-based AI agents that can be easily configured
- Ollama integration for local LLM inference using Llama3:8b
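Each agent ultimately queries the locally running Ollama server. A minimal sketch of that call, using Ollama's standard `/api/generate` endpoint (the function names here are illustrative, not the project's actual code):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_request(prompt, model="llama3:8b", temperature=0.7):
    """Build the JSON payload Ollama's /api/generate endpoint expects."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one complete response, not chunks
        "options": {"temperature": temperature},
    }

def ask_ollama(prompt):
    """Send a prompt to the local Ollama server and return its reply text."""
    data = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Running `ask_ollama` requires `ollama serve` to be active and the `llama3:8b` model pulled.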
## Prerequisites

- Node.js 18+ and npm/yarn
- Python 3.10+
- Ollama installed locally

On macOS, Ollama can be installed with Homebrew:

```bash
brew install ollama
```
## Getting Started

1. Clone the repository:

   ```bash
   git clone https://github.com/ninjonas/ai-council.git
   cd ai-council
   ```

2. Set up the Next.js application:

   ```bash
   cd web
   npm install
   npm run dev
   ```

3. Set up the Python agents:

   ```bash
   cd ../agents
   pip install -r requirements.txt
   python server.py
   ```

The web interface will be available at http://localhost:3000.
## Adding a New Agent

1. Create a new folder in the `agents/agent_instances` directory:

   ```bash
   mkdir -p agents/agent_instances/your_agent_name/references
   ```

2. Configure the agent by creating a `config.yml` in the new folder:

   ```yaml
   name: "Your Agent Name"
   description: "What this agent specializes in"
   personality: "How this agent should communicate"
   expertise: ["area1", "area2"]
   temperature: 0.7
   max_tokens: 1024
   system_prompt: |
     As Your Agent Name, your approach should be...
   ```

3. Add reference materials (optional):
   - Place PDF, Markdown, or text files in the `agents/agent_instances/your_agent_name/references` folder
   - Agents will automatically incorporate these materials in their reasoning

4. Restart the agent server:

   ```bash
   cd agents
   python server.py
   ```
## Architecture

- Web Application: Next.js with TypeScript and Tailwind CSS
- Agent System: Python-based with WebSockets for real-time communication
- LLM Integration: Ollama with Llama3:8b for efficient local inference
- Data Exchange: Standardized JSON protocols for agent communication
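The message schema of that JSON protocol isn't documented here; a hypothetical agent-to-agent message (every field name below is illustrative, not the project's actual schema) might round-trip like this:

```python
import json

# Hypothetical message shape — field names are illustrative only
message = {
    "type": "opinion",
    "agent": "your_agent_name",
    "query_id": "q-42",
    "content": "My analysis of the query...",
    "confidence": 0.8,
}

encoded = json.dumps(message)   # what would travel over the WebSocket
decoded = json.loads(encoded)   # what the receiving agent would see
assert decoded == message       # JSON round-trips the dict intact
```

Keeping messages as flat JSON objects like this makes them easy to validate and log on both the Python and TypeScript sides.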
## System Instructions

Users can provide system-wide instructions that all agents will follow during their discussion. This is optional and can be left blank.
## Agent Configuration

Each agent's `config.yml` supports:

- `name`: Agent identifier
- `description`: Purpose description
- `personality`: Communication style
- `expertise`: List of specialization areas
- `temperature`: Creativity level (0.0-1.0)
- `max_tokens`: Maximum response length
- `system_prompt`: Default instructions for this agent
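Those fields can be sanity-checked when a config is loaded. A minimal sketch of such a check (the function name and error messages are assumptions; the actual server may validate differently or not at all):

```python
# Fields every agent config is documented to carry
REQUIRED_FIELDS = {
    "name", "description", "personality", "expertise",
    "temperature", "max_tokens", "system_prompt",
}

def validate_config(config):
    """Check a parsed config.yml dict against the documented fields.

    Illustrative only — shows the documented constraints
    (temperature in 0.0-1.0, positive max_tokens) as code.
    """
    missing = REQUIRED_FIELDS - config.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if not 0.0 <= config["temperature"] <= 1.0:
        raise ValueError("temperature must be between 0.0 and 1.0")
    if config["max_tokens"] <= 0:
        raise ValueError("max_tokens must be positive")
    return True
```

Running this right after parsing the YAML surfaces typos in a new agent's config at server startup instead of mid-discussion.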
## License

MIT