Multi-Agent LLM Novel Creation System

This project implements a multi-agent system for collaborative novel creation using a single shared LLM instance (Mistral-7B). The system features multiple specialized agents working together to generate, refine, and evaluate narrative content.

System Architecture

The system follows a hierarchical agent architecture with specialized roles:

BaseAgent (Abstract)
├── ManagementAgent - Orchestrates workflow and coordinates other agents
├── ContentAgent (Abstract)
│   ├── SpecificationsAgent - Defines requirements and story specifications
│   ├── ProductionAgent - Generates draft content
│   ├── WritingAgent - Polishes prose
│   └── EvaluationAgent - Reviews and scores content quality
├── ConsistencyAgent (Abstract)
│   ├── IntegrationAgent - Ensures narrative coherence
│   ├── DeduplicationAgent - Removes duplicates
│   └── RedundancyAgent - Flags repeated ideas
└── SupportAgent (Abstract)
    ├── ChroniclerAgent - Logs creative decisions
    └── ResearcherAgent - Conducts research
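The hierarchy above could be sketched with Python's abc module. This is an illustrative outline, not the project's actual API; the class names follow the tree, but the method names and message shapes are assumptions:

```python
from abc import ABC, abstractmethod

class BaseAgent(ABC):
    """Root of the agent hierarchy; holds a reference to the shared LLM."""

    def __init__(self, name: str, llm) -> None:
        self.name = name
        self.llm = llm  # the single shared model instance

    @abstractmethod
    def handle(self, message: dict) -> dict:
        """Process an incoming protocol message and return a response."""

class ContentAgent(BaseAgent, ABC):
    """Abstract parent for agents that produce or judge narrative text."""

class ProductionAgent(ContentAgent):
    """Generates draft content (placeholder logic; no model call here)."""

    def handle(self, message: dict) -> dict:
        # A real implementation would prompt the shared LLM at this point.
        draft = f"[draft for: {message['content']['payload']}]"
        return {"sender": self.name, "content": {"payload": draft}}

agent = ProductionAgent("ProductionAgent", llm=None)
reply = agent.handle({"content": {"payload": "chapter 1 outline"}})
```

Because `BaseAgent` and `ContentAgent` declare or inherit an abstract `handle`, only the leaf agents can be instantiated, which mirrors the (Abstract) markers in the tree.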

Key Features

  • Resource Efficient: All agents share a single LLM instance (Mistral-7B with 4-bit quantization)
  • Structured Communication: Agents communicate via standardized message protocol
  • Vector Memory: Persistent vector database using Chroma for shared knowledge
  • Workflow-Based: Tasks are organized into sequential workflows with conditional branching
  • Tiered Memory: Optimization for minimal GPU memory usage

Getting Started

Prerequisites

  • Python 3.8+
  • Minimum 8GB RAM
  • GPU with at least 5GB VRAM (optional but recommended)

Installation

  1. Clone the repository
  2. Run the setup script:
python src/setup.py

The setup script performs the following operations:

  • Installs Poetry for Python dependency management
  • Creates the necessary folder structure for the project
  • Installs all required dependencies (including handling CUDA if available)
  • Sets up configuration files
  • Runs installation tests to verify the setup

Setup Script Options

The setup script accepts several command-line arguments:

python src/setup.py --dev --use-bash

Available options:

  • --dev: Includes development dependencies
  • --use-bash: Uses bash shell for commands (useful for WSL)

For Windows users with WSL, the --use-bash flag is particularly helpful:

python src/setup.py --use-bash

Running the System

After setup is complete, you can run the system with Poetry:

# From the project root
poetry run python src/main.py

If you used the --use-bash flag during setup:

bash -c 'poetry run python src/main.py'

Configuration Options

The system can be customized using command-line arguments:

poetry run python src/main.py --model mistralai/Mistral-7B-Instruct-v0.2 --data-dir ./data --log-dir ./logs --device cuda --debug

Available options:

  • --model: LLM model path or HuggingFace identifier
  • --data-dir: Directory for data storage
  • --log-dir: Directory for logs
  • --device: Device to run on (cuda or cpu)
  • --debug: Enable debug logging

Verifying Installation

You can verify your installation at any time by running the test script:

poetry run python src/tests/test_installation.py

This will check for all required dependencies and verify CUDA availability if applicable.
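The command-line options for src/main.py could be parsed with Python's argparse roughly as follows. This is a sketch, not the project's actual entry point, and the default values shown are assumptions:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Illustrative parser mirroring the documented flags; defaults are assumed.
    parser = argparse.ArgumentParser(description="Multi-agent novel creation system")
    parser.add_argument("--model", default="mistralai/Mistral-7B-Instruct-v0.2",
                        help="LLM model path or HuggingFace identifier")
    parser.add_argument("--data-dir", default="./data",
                        help="Directory for data storage")
    parser.add_argument("--log-dir", default="./logs",
                        help="Directory for logs")
    parser.add_argument("--device", choices=["cuda", "cpu"], default="cpu",
                        help="Device to run on")
    parser.add_argument("--debug", action="store_true",
                        help="Enable debug logging")
    return parser

args = build_parser().parse_args(["--device", "cuda", "--debug"])
```

Note that argparse converts the dashed flag names to attributes, so --data-dir becomes args.data_dir.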

Project Structure

  • book_creator/ - Core implementation modules
    • agents/ - Agent implementations
    • memory/ - Vector database functionality
    • models/ - Model management
    • utils/ - Utility functions
  • docs/ - Documentation
  • data/ - Data storage
  • logs/ - Log files

Currently Implemented Agents

  • ManagementAgent: Orchestrates workflows and coordinates other agents
  • SpecificationsAgent: Defines story specifications, characters, and plot points

Development Status

This project is currently in active development. Additional agents and functionality will be added in future releases.

Implementation Details

Resource Optimization

  • Single shared LLM instance across all agents
  • 4-bit quantization for minimal VRAM usage
  • Sequential agent execution so all agents can share the model instance
  • Efficient context management to minimize token usage
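The single-shared-instance pattern can be sketched as a lazy singleton holder: every agent asks the holder for the model and receives the identical object, so only one copy of the weights ever occupies (V)RAM. The class below is illustrative; in the real system the loader would load Mistral-7B with 4-bit quantization rather than return a placeholder:

```python
class SharedLLM:
    """Lazily loads one model and hands the same instance to every agent."""

    _instance = None

    @classmethod
    def get(cls, loader=None):
        # `loader` is a zero-argument callable returning a model; in the
        # actual system this would load the 4-bit quantized Mistral-7B.
        if cls._instance is None:
            cls._instance = (loader or cls._default_loader)()
        return cls._instance

    @staticmethod
    def _default_loader():
        return object()  # placeholder standing in for the quantized model

# Two agents requesting the model get the very same object back.
model_for_agent_a = SharedLLM.get()
model_for_agent_b = SharedLLM.get()
```

Sequential execution fits naturally with this design: since there is only one model, agents take turns prompting it rather than running in parallel.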

Message Protocol

All agent communication follows a standardized JSON message protocol:

{
  "message_id": "uuid-string",
  "timestamp": "ISO-8601-datetime",
  "sender": "AgentName",
  "recipient": "AgentName",
  "message_type": "request|response|notification|error",
  "content": {
    "action": "action_name",
    "parameters": {},
    "context": [],
    "payload": "The actual content or instruction"
  },
  "metadata": {
    "priority": 1-5,
    "workflow_id": "uuid-string",
    "parent_message_id": "uuid-string-if-response",
    "trace_id": "uuid-for-tracing"
  }
}
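A message conforming to this schema could be assembled with a small helper. The field names come from the schema above; the helper function itself and its defaults are illustrative, not part of the project's API:

```python
import json
import uuid
from datetime import datetime, timezone

def make_message(sender, recipient, message_type, action, payload,
                 priority=3, workflow_id=None, parent_message_id=None):
    """Build a dict matching the standardized message protocol."""
    return {
        "message_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "sender": sender,
        "recipient": recipient,
        "message_type": message_type,
        "content": {
            "action": action,
            "parameters": {},
            "context": [],
            "payload": payload,
        },
        "metadata": {
            "priority": priority,  # 1-5 per the schema
            "workflow_id": workflow_id or str(uuid.uuid4()),
            "parent_message_id": parent_message_id,
            "trace_id": str(uuid.uuid4()),
        },
    }

msg = make_message("ManagementAgent", "SpecificationsAgent",
                   "request", "define_specs", "Outline a mystery novel")
wire = json.dumps(msg)  # messages serialize cleanly to JSON for transport
```

Responses would set parent_message_id to the request's message_id, which lets the ManagementAgent correlate replies within a workflow.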

Workflows

Tasks are organized into workflows with sequential steps. Each step specifies:

  • Target agent
  • Action to perform
  • Parameters
  • Next steps
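One way to represent such a workflow is a dict of steps, where each step names its target agent, action, parameters, and next step; making "next" a callable gives the conditional branching mentioned under Key Features. This sketch is illustrative, with a dummy dispatcher standing in for actual agent routing:

```python
def run_workflow(steps, start, dispatch):
    """Walk a workflow, dispatching each step and following `next` links.

    `steps` maps step-id -> {"agent", "action", "params", "next"};
    `next` may be a step-id, None (stop), or a callable on the step's
    result (conditional branching). `dispatch(agent, action, params)`
    performs the step -- in the real system it would send a protocol
    message to the named agent.
    """
    results = []
    step_id = start
    while step_id is not None:
        step = steps[step_id]
        result = dispatch(step["agent"], step["action"], step["params"])
        results.append((step_id, result))
        nxt = step["next"]
        step_id = nxt(result) if callable(nxt) else nxt
    return results

steps = {
    "draft": {"agent": "ProductionAgent", "action": "draft", "params": {},
              "next": "review"},
    "review": {"agent": "EvaluationAgent", "action": "score", "params": {},
               # branch: loop back to drafting on a failing score, else stop
               "next": lambda score: "draft" if score < 0 else None},
}
trace = run_workflow(steps, "draft", lambda agent, action, params: 1)
```

With the dummy dispatcher always returning a passing score, the workflow visits "draft" then "review" and stops.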

License

This project is licensed under the MIT License - see the LICENSE file for details.
