versionHQ/multi-agent-system

Overview

Agentic orchestration framework for multi-agent networks and task graphs for complex task automation.

Key Features

versionhq is a Python framework for agent networks that handle complex task automation without human interaction.

Agents are model-agnostic and improve task output, while optimizing token cost and job latency, by sharing their memory, knowledge base, and RAG tools with other agents in the network.

Agent Network

Agents adapt their formation based on task complexity.

You can specify a desired formation or allow the agents to determine it autonomously (default).

Solo Agent
  • Usage: a single agent with tools, knowledge, and memory. When self-learning mode is on, it turns into the Random formation.
  • Use case: an email agent drafts a promo message for the given audience.

Supervising
  • Usage: a leader agent gives directions while sharing its knowledge and memory; subordinates can be solo agents or networks.
  • Use case: the leader agent strategizes an outbound campaign plan and assigns components such as media mix or message creation to subordinate agents.

Squad
  • Usage: tasks, knowledge, and memory are shared among network members.
  • Use case: an email agent and a social media agent share product knowledge and deploy a multi-channel outbound campaign.

Random
  • Usage: a single agent handles tasks, asking for help from other agents without sharing its memory or knowledge.
  • Use case: 1) an email agent drafts a promo message for the given audience, asking for insights on tone from other email agents that oversee other clusters; 2) an agent calls an external agent to deploy the campaign.
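
To pin a formation explicitly rather than letting the agents decide, here is a minimal sketch (it assumes form_agent_network accepts a formation argument and that a vhq.Formation enum matches the formations listed above; verify against your installed version):

import versionhq as vhq

# Assumption: the `formation` argument and vhq.Formation are available.
# Omit the argument to let the agents choose the formation autonomously (the default).
network = vhq.form_agent_network(
   task="Plan and deploy an outbound email campaign.",
   expected_outcome="A campaign plan with drafted messages per channel.",
   formation=vhq.Formation.SUPERVISING,
)
res = network.launch()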

Graph Theory Concept

To completely automate task workflows, agents will build a task-oriented network by generating nodes that represent tasks and connecting them with dependency-defining edges.

Each node is triggered by specific events and executed by an assigned agent once all dependencies are met.

While the network automatically reconfigures itself, you retain the ability to direct the agents using the should_reform variable.

The following code snippet demonstrates the TaskGraph and its visualization, saving the diagram to the uploads directory.

import versionhq as vhq

task_graph = vhq.TaskGraph(directed=False, should_reform=True) # should_reform=True lets the agents reconfigure the graph automatically

task_a = vhq.Task(description="Research Topic")
task_b = vhq.Task(description="Outline Post")
task_c = vhq.Task(description="Write First Draft")

node_a = task_graph.add_task(task=task_a)
node_b = task_graph.add_task(task=task_b)
node_c = task_graph.add_task(task=task_c)

task_graph.add_dependency(
   node_a.identifier, node_b.identifier,
   dependency_type=vhq.DependencyType.FINISH_TO_START, weight=5, description="B depends on A"
)
task_graph.add_dependency(
   node_a.identifier, node_c.identifier,
   dependency_type=vhq.DependencyType.FINISH_TO_FINISH, lag=1, required=False, weight=3
)

# To visualize the graph:
task_graph.visualize()

# To start executing nodes:
last_task_output, outputs = task_graph.activate()

assert isinstance(last_task_output, vhq.TaskOutput)
assert all(k in task_graph.nodes and isinstance(v, vhq.TaskOutput) for k, v in outputs.items())

Task Graph

A TaskGraph represents tasks as nodes and their execution dependencies as edges, automating rule-based execution.

Agent Networks can handle TaskGraph objects by optimizing their formations.

The following example demonstrates a simple concept of a supervising agent network handling a task graph with three tasks and one critical edge.
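
Since the original diagram is not reproduced here, the sketch below illustrates the idea using only the TaskGraph and AgentNetwork APIs shown elsewhere in this README; assigning the graph's tasks directly to network members is an assumption made for illustration:

import versionhq as vhq

# Three tasks and one critical (FINISH_TO_START) edge.
task_a = vhq.Task(description="Research topic")
task_b = vhq.Task(description="Outline post")
task_c = vhq.Task(description="Write first draft")

graph = vhq.TaskGraph(directed=True)
node_a = graph.add_task(task=task_a)
node_b = graph.add_task(task=task_b)
node_c = graph.add_task(task=task_c)
graph.add_dependency(
   node_a.identifier, node_b.identifier,
   dependency_type=vhq.DependencyType.FINISH_TO_START, weight=5, description="critical edge"
)

# A supervising formation: a manager agent directs a member agent.
writer = vhq.Agent(role="Writer", goal="Produce the draft", llm="llm-of-your-choice")
editor = vhq.Agent(role="Editor", goal="Supervise the draft", llm="llm-of-your-choice")
network = vhq.AgentNetwork(
   members=[
      vhq.Member(agent=writer, is_manager=False, tasks=[task_a, task_b]),
      vhq.Member(agent=editor, is_manager=True, tasks=[task_c]),
   ],
)
res = network.launch()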


Optimization

Agents are model-agnostic and can handle multiple tasks, leveraging their own and their peers' knowledge sources, memories, and tools.

Agents are optimized during network formation, but customization is possible before or after.

The following code snippet demonstrates agent customization:

import versionhq as vhq

agent = vhq.Agent(
   role="Marketing Analyst",
   goal="my amazing goal"
) # assuming this agent was created during the network formation

# update the agent
agent.update(
   llm="gemini-2.0", # updating LLM (Valid llm_config will be inherited to the new LLM.)
   tools=[vhq.Tool(func=lambda x: x)], # adding tools
   max_rpm=3,
   knowledge_sources=["<KC1>", "<KS2>"], # adding knowledge sources. This will trigger the storage creation.
   memory_config={"user_id": "0001"}, # adding memories
   dummy="I am dummy" # <- invalid field will be automatically ignored
)

Quick Start

Package installation

pip install versionhq

(Python 3.11 / 3.12)
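
To confirm the installation, a quick sanity check (this assumes the package exposes a __version__ attribute; adjust if your installed version does not):

import versionhq as vhq

# Sanity check: the import succeeds and reports the installed version.
print(vhq.__version__)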

Forming an agent network

import versionhq as vhq

network = vhq.form_agent_network(
   task="YOUR AMAZING TASK OVERVIEW",
   expected_outcome="YOUR OUTCOME EXPECTATION",
)
res = network.launch()

This will form a network of multiple agents in a suitable formation and return a TaskOutput object with output in JSON, plain text, and Pydantic model formats, along with an evaluation.
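
As a rough usage sketch, the result can be inspected through the fields shown in the TaskOutput example later in this README (raw, json_dict, pydantic, evaluation); treat the exact attribute set as an assumption for your installed version:

# `res` is the object returned by network.launch() above.
print(res.raw)        # raw response string
print(res.json_dict)  # parsed JSON dictionary
print(res.pydantic)   # Pydantic output, when a schema is attached to the task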

Executing tasks

You can simply build an agent using the Agent model and execute a task using the Task class.

By default, agents prioritize JSON over plain text outputs.

import versionhq as vhq
from pydantic import BaseModel

class CustomOutput(BaseModel):
   test1: str
   test2: list[str]

def dummy_func(message: str, test1: str, test2: list[str]) -> str:
   return f"""{message}: {test1}, {", ".join(test2)}"""

task = vhq.Task(
   description="Amazing task",
   pydantic_output=CustomOutput,
   callback=dummy_func,
   callback_kwargs=dict(message="Hi! Here is the result: ")
)

res = task.execute(context="amazing context to consider.")
print(res)

This will return a TaskOutput object that stores the response in plain text, JSON, and Pydantic model (CustomOutput) formats, along with the callback result, tool output (if given), and evaluation results (if given).

res == TaskOutput(
   task_id=UUID('<TASK UUID>'),
   raw='{\"test1\":\"random str\", \"test2\":[\"str item 1\", \"str item 2\", \"str item 3\"]}',
   json_dict={'test1': 'random str', 'test2': ['str item 1', 'str item 2', 'str item 3']},
   pydantic=<class '__main__.CustomOutput'>,
   tool_output=None,
   callback_output='Hi! Here is the result: random str, str item 1, str item 2, str item 3', # returned a plain text summary
   evaluation=None
)

Supervising

To create an agent network with one or more manager agents, designate members using the is_manager tag.

import versionhq as vhq

agent_a = vhq.Agent(role="agent a", goal="My amazing goals", llm="llm-of-your-choice")
agent_b = vhq.Agent(role="agent b", goal="My amazing goals", llm="llm-of-your-choice")

task_1 = vhq.Task(
   description="Analyze the client's business model.",
   response_fields=[vhq.ResponseField(title="test1", data_type=str, required=True),],
   allow_delegation=True
)

task_2 = vhq.Task(
   description="Define a cohort.",
   response_fields=[vhq.ResponseField(title="test1", data_type=int, required=True),],
   allow_delegation=False
)

network = vhq.AgentNetwork(
   members=[
      vhq.Member(agent=agent_a, is_manager=False, tasks=[task_1]),
      vhq.Member(agent=agent_b, is_manager=True, tasks=[task_2]), # Agent B as a manager
   ],
)
res = network.launch()

assert isinstance(res, vhq.NetworkOutput)
assert not [item for item in task_1.processed_agents if "vhq-Delegated-Agent" == item]
assert [item for item in task_1.processed_agents if "agent b" == item]

This will return a list of dictionaries whose keys are defined in the ResponseField of each task.

Tasks can be delegated to a manager, peers within the agent network, or a completely new agent.


Technologies Used

Schema, Data Validation

  • Pydantic: Data validation and serialization library for Python.
  • Upstage: Document processor for ML tasks. (Uses the Document Parser API to extract data from documents.)
  • Docling: Document parsing

Workflow, Task Graph

  • NetworkX: A Python package to analyze, create, and manipulate complex graph networks. Ref. Gallery
  • Matplotlib: For graph visualization.
  • Graphviz: For graph visualization.

LLM Curation

  • LiteLLM: LLM orchestration platform

Tools

  • Composio: Connect RAG agents with external tools, apps, and APIs to perform actions and receive triggers. We use tools and RAG tools from the Composio toolset.

Storage

  • mem0ai: Agents' memory storage and management.
  • Chroma DB: Vector database for storing and querying usage data.
  • SQLite: C-language library that implements a small SQL database engine.

Deployment

  • Python: Primary programming language. v3.12.x is recommended
  • uv: Python package installer and resolver
  • pre-commit: Manage and maintain pre-commit hooks
  • setuptools: Build python modules

Project Structure

.
├── .github/
│   └── workflows/            # GitHub Actions
├── docs/                     # Documentation
├── mkdocs.yml                # MkDocs config
├── src/
│   └── versionhq/            # Orchestration framework package
│       ├── agent/            # Core components
│       ├── llm/
│       ├── task/
│       ├── tool/
│       └── ...
├── tests/                    # Pytest - by core component and use cases in the docs
│   ├── agent/
│   ├── llm/
│   └── ...
├── .diagrams/  [.gitignore]  # Local directory to store graph diagrams
├── .logs/      [.gitignore]  # Local directory to store error/warning logs for debugging
├── pyproject.toml            # Project config
└── .env.sample               # Sample .env file


Setting Up Your Project

Installing package manager

For macOS:

brew install uv

For Ubuntu/Debian:

curl -LsSf https://astral.sh/uv/install.sh | sh

Installing dependencies

uv venv
source .venv/bin/activate
uv lock --upgrade
uv sync --all-extras
  • AssertionError/module mismatch errors: Set the default Python version using pyenv:

    pyenv install 3.12.8
    pyenv global 3.12.8  (optional: `pyenv global system` to get back to the system default ver.)
    uv python pin 3.12.8
    echo 3.12.8 >> .python-version
    
  • pygraphviz related errors: Run the following commands:

    brew install graphviz
    uv pip install --config-settings="--global-option=build_ext" \
    --config-settings="--global-option=-I$(brew --prefix graphviz)/include/" \
    --config-settings="--global-option=-L$(brew --prefix graphviz)/lib/" \
    pygraphviz
    
    • If the error continues, skip pygraphviz installation by:
    uv sync --all-extras --no-extra pygraphviz
    
  • torch/Docling related errors: Set the default Python version to 3.11.x or 3.12.x (same fix as the AssertionError above).

Adding env secrets to .env file

Create a .env file in the project root and add secret variables, following the .env.sample file.
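
A minimal sketch of loading those secrets at runtime with python-dotenv (the variable name below is only an illustration; use the keys defined in .env.sample):

import os
from dotenv import load_dotenv

load_dotenv()  # read secrets from .env in the project root

import versionhq as vhq  # agents pick up API keys from the environment

# Illustrative check only; the actual variable names come from .env.sample.
assert os.getenv("OPENAI_API_KEY") is not None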


Contributing

versionhq is an open-source project.

Steps

  1. Create your feature branch (git checkout -b feature/your-amazing-feature)

  2. Create amazing features

  3. Add a test function to the tests directory and run pytest.

    • Add secret values defined in .github/workflows/run_test.yml to your GitHub repository secrets, located at Settings > Secrets and variables > Actions.

    • Run the following command:

      uv run pytest tests -vv --cache-clear
      

    Building a new pytest function

    • Files added to the tests directory must end in _test.py.

    • Test functions within the files must begin with test_.

    • Pytest priorities are 1. playground demo > 2. docs use cases > 3. other features

  4. Update docs accordingly.

  5. Pull the latest version of the source code from the main branch (git pull origin main) and resolve any conflicts.

  6. Commit your changes (git add . / git commit -m 'Add your-amazing-feature')

  7. Push to the branch (git push origin feature/your-amazing-feature)

  8. Open a pull request

Optional

  • Flag with #! REFINEME for any improvements needed and #! FIXME for any errors.

  • Playground is available at https://versi0n.io.

Package Management with uv

  • Add a package: uv add <package>
  • Remove a package: uv remove <package>
  • Run a command in the virtual environment: uv run <command>
  • After updating dependencies, update requirements.txt accordingly or run uv pip freeze > requirements.txt

Pre-Commit Hooks

  1. Install pre-commit hooks:

    uv run pre-commit install
    
  2. Run pre-commit checks manually:

    uv run pre-commit run --all-files
    

Pre-commit hooks help maintain code quality by running checks for formatting, linting, and other issues before each commit.

  • To skip pre-commit hooks
    git commit --no-verify -m "your-commit-message"
    

Documentation

  • To edit the documentation, see the docs directory and edit the respective component.

  • We use mkdocs to update the docs. You can run the docs locally at http://127.0.0.1:8000/.

    uv run python3 -m mkdocs serve --clean
    
  • To add a new page, update mkdocs.yml in the root. Refer to MkDocs documentation for more details.


Troubleshooting

Common issues and solutions:

  • API key errors: Ensure all API keys in the .env file are correct and up to date. Make sure to call load_dotenv() at the top of the Python file to apply the latest environment values.

  • Database connection issues: Check if the Chroma DB is properly initialized and accessible.

  • Memory errors: If processing large contracts, you may need to increase the available memory for the Python process.

  • Issues related to the Python version: Docling/PyTorch is not ready for Python 3.13 as of January 2025. Use Python 3.12.x as the default by running uv venv --python 3.12.8 and uv python pin 3.12.8.

  • Issues related to dependencies: rm -rf uv.lock, uv cache clean, uv venv, and run uv pip install -r requirements.txt -v.

  • Issues related to agents and other systems: Check .logs directory located in the root directory for error messages and stack traces.

  • Issues related to Python quit unexpectedly: Check this stackoverflow article.

  • reportMissingImports error from Pyright after installing the package: This might occur when installing new libraries while VS Code is running. Open the command palette (Ctrl + Shift + P) and run the Python: Restart Language Server task.


Frequently Asked Questions (FAQ)

Q. Where can I see if the agent is working?

A. Visit the playground at https://versi0n.io.