This Streamlit application showcases the Mixture of Agents (MOA) architecture proposed by Together AI, powered by Ollama or any Ollama-compatible API. It allows users to interact with a configurable multi-agent system for enhanced AI-driven conversations.
*Diagram: agent responses (layer agents) feeding into the final (mixture) response (main model). Source: adaptation of the Together AI blog post on Mixture of Agents.*
## Features

- Interactive chat interface powered by MOA
- Configurable main model and layer agents
- Real-time streaming of responses
- Visualisation of intermediate layer outputs
- Customisable agent parameters through the UI
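Conceptually, the MOA flow is: each layer agent answers the prompt, intermediate outputs are fed back in for the configured number of cycles, and the main model synthesises a final response. The sketch below illustrates that idea only; the function names (`call_model`, `moa_respond`) and plain-string message handling are assumptions, not the app's actual code.

```python
# Illustrative sketch of the Mixture-of-Agents flow (not the app's real implementation).
# call_model() stands in for a chat-completion request to an Ollama-compatible API.

def call_model(model: str, system: str, user: str) -> str:
    # Placeholder: a real implementation would POST to OLLAMA_HOST.
    return f"[{model}] answer to: {user}"

def moa_respond(user_prompt, layer_agents, main_model, cycles=1):
    context = ""
    for _ in range(cycles):
        # Each layer agent answers, seeing the previous layer's outputs in its system prompt.
        outputs = [
            call_model(agent["model"], agent["system_prompt"] + context, user_prompt)
            for agent in layer_agents
        ]
        context = "\n\nPrevious layer outputs:\n" + "\n".join(outputs)
    # The main model aggregates the intermediate responses into a final answer.
    return call_model(main_model, "Synthesize the candidate answers." + context, user_prompt)

answer = moa_respond(
    "What is MOA?",
    layer_agents=[{"model": "llama3", "system_prompt": "Be concise."}],
    main_model="llama3",
    cycles=2,
)
```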
## Getting started

1. Clone the repository:

   ```bash
   git clone https://github.com/sammcj/moa.git
   cd moa
   ```

2. Install the required dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Set up your environment variables: create a `.env` file in the root directory and add your Ollama-compatible base URL and API key:

   ```bash
   OLLAMA_HOST=http://localhost:11434
   OLLAMA_API_KEY=ollama
   ```

4. Run the Streamlit app:

   ```bash
   streamlit run app.py
   ```

5. Open your web browser and navigate to the URL provided by Streamlit (usually `http://localhost:8501`).

6. Use the sidebar to configure the MOA settings:
   - Select the main model
   - Set the number of cycles
   - Customize the layer agent configuration

7. Start chatting with the MOA system using the input box at the bottom of the page.
## Project structure

- `app.py`: Main Streamlit application file
- `moa/`: Package containing the MOA implementation
  - `__init__.py`: Package initialiser
  - `moa.py`: Core MOA agent implementation
  - `prompts.py`: System prompts for the agents
- `main.py`: CLI version of the MOA chat interface
- `requirements.txt`: List of Python dependencies
- `static/`: Directory for static assets (images, etc.)
## Configuration

The MOA system can be configured through the Streamlit UI or by modifying the default configuration in `app.py`. The main configurable parameters are:
- Main model: The primary language model used for generating final responses
- Number of cycles: How many times the layer agents are invoked before the main agent
- Layer agent configuration: A JSON object defining the system prompts, model names, and other parameters for each layer agent
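As an illustration, a layer agent configuration might look like the following. The exact key names (`system_prompt`, `model_name`, `temperature`) and the `{helper_response}` placeholder are assumptions here; consult the defaults in `app.py` for the real schema.

```python
# Hypothetical layer agent configuration; the actual keys are defined in app.py.
layer_agent_config = {
    "layer_agent_1": {
        # {helper_response} is assumed to be substituted with prior layer outputs.
        "system_prompt": "Think through the problem step by step. {helper_response}",
        "model_name": "llama3",
        "temperature": 0.3,
    },
    "layer_agent_2": {
        "system_prompt": "Respond concisely and accurately. {helper_response}",
        "model_name": "mistral",
        "temperature": 0.7,
    },
}
```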
## Contributing

Contributions to this project are welcome! Please follow these steps to contribute:
- Fork the repository
- Create a new branch for your feature or bug fix
- Make your changes and commit them with descriptive commit messages
- Push your changes to your fork
- Submit a pull request to the main repository
Please ensure that your code adheres to the project's coding standards and includes appropriate tests and documentation.
## Licence

- The project was forked from skapadia3214/moa
- groq-moa was stated to be licensed under the MIT License, but its LICENSE.md file contained the Apache 2.0 licence.
## Acknowledgements

- skapadia3214 for their Groq-based MOA implementation
- Ollama for providing the Ollama-compatible API
- Together AI for proposing the Mixture of Agents architecture and providing the conceptual image
- Streamlit for the web application framework
## Citation

This project implements the Mixture-of-Agents architecture proposed in the following paper:

```bibtex
@article{wang2024mixture,
  title={Mixture-of-Agents Enhances Large Language Model Capabilities},
  author={Wang, Junlin and Wang, Jue and Athiwaratkun, Ben and Zhang, Ce and Zou, James},
  journal={arXiv preprint arXiv:2406.04692},
  year={2024}
}
```
For more information about the Mixture-of-Agents concept, please refer to the original research paper and the Together AI blog post.