Tentacle

A git-powered agentic coding orchestrator that coordinates multiple AI models to collaboratively plan, write, and evaluate code.

Quick Start

Prerequisites

  • Rust 1.70+ (cargo --version)
  • An initialized Git repository (git init && git add . && git commit -m "initial")
  • At least one AI provider configured (see Configuration)
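
To confirm the first two are in place, these standard cargo and git commands suffice:

# Confirm the toolchain and repository state
cargo --version                        # expect 1.70 or newer
git rev-parse --is-inside-work-tree    # prints "true" inside a git repository
git status --short                     # empty output means a clean working tree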

Installation

cargo build --release
./target/release/tentacle

Basic Usage

  1. Configure your agents in roles.toml:
[[roles]]
name = "Planner_A"
kind = "planner"
provider = "Local"
model = "llama3.1:8b"
enabled = true

[[roles]]
name = "Dev_A"
kind = "developer"
provider = "Local"
model = "llama3.1:8b"
enabled = true

[arbiter]
name = "Arbiter"
provider = "Local"
model = "llama3.1:8b"
  2. Set environment variables for your chosen providers:
# For local models (Ollama)
export LOCAL_BACKEND=ollama

# For cloud providers (optional)
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...
export GEMINI_API_KEY=...
  3. Start Ollama (if using local models):
ollama serve
ollama pull llama3.1:8b
  4. Run Tentacle:
./target/release/tentacle
  5. Enter your coding task in the TUI prompt and press Enter.

How It Works

Tentacle creates a git workflow where each developer agent works on a separate branch:

  1. Planning Phase: Multiple planner agents propose candidate approaches
  2. Developing Phase: Developer agents implement solutions, each on its own branch
  3. Arbitrating Phase: The arbiter evaluates all solutions and decides the merge strategy
  4. Merging Phase: The selected branches are merged to produce the final integrated solution

The terminal UI shows real-time progress, agent status, and git operations.
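
As an illustration, a run with two developer agents might leave a branch layout like the sketch below; the branch names here are hypothetical, and Tentacle's actual naming scheme may differ:

$ git branch
  main
  tentacle/dev_a    # Dev_A's candidate solution (hypothetical branch name)
  tentacle/dev_b    # Dev_B's candidate solution (hypothetical branch name)
* tentacle/merge    # integrated result after arbitration (hypothetical branch name)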

Configuration

Providers

  • Local: Ollama or LM Studio compatible servers
  • OpenAI: GPT-4, GPT-3.5-turbo, etc.
  • Claude: Claude 3.5 Sonnet, Haiku, etc.
  • Gemini: Gemini Pro, Flash, etc.
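
For example, a role can target a cloud provider instead of a local model. This sketch assumes the provider string matches the names listed above and that the model field takes the vendor's model ID, as in the Local example earlier:

[[roles]]
name = "Dev_Cloud"             # hypothetical role name
kind = "developer"
provider = "OpenAI"            # assumed to match the provider names above
model = "gpt-4"
enabled = true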

Environment Variables

# Ollama (default: http://localhost:11434)
LOCAL_BACKEND=ollama
OLLAMA_HOST=http://localhost:11434

# OpenAI-compatible servers (LM Studio, etc.)
LOCAL_BACKEND=compat
LOCAL_COMPAT_BASE=http://localhost:1234/v1
LOCAL_COMPAT_KEY=optional-key

# Cloud providers
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=...

Agent Configuration

See roles.toml for full configuration options including:

  • Role types: planner, developer, reviewer
  • Provider selection and model names
  • Custom system prompts and temperature
  • Enable/disable flags
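
Putting those options together, a fuller role entry might look like the sketch below. system_prompt is referenced later in this README; the temperature key name is an assumption based on the list above, so check roles.toml for the exact spelling:

[[roles]]
name = "Reviewer_A"
kind = "reviewer"
provider = "Local"
model = "llama3.1:8b"
enabled = true
system_prompt = "You are a meticulous code reviewer. Flag correctness issues first."
temperature = 0.2              # key name assumed; see roles.toml for the exact option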

Prompt Customization (.tentacle/)

Tentacle looks for an optional .tentacle/ directory in the project root at startup. When present, the contents are merged into every role's system prompt before API calls are issued.

.tentacle/
├── Agents.md      # Global guidance applied to all roles
├── planner.md     # Planner-specific instructions
├── developer.md   # Developer-specific instructions
├── reviewer.md    # Reviewer-specific instructions (optional)
└── arbiter.md     # Arbitrator-specific instructions

Guidelines:

  • Files are treated as UTF-8 text; invalid UTF-8 is ignored with a warning.
  • Oversized files (>1 MiB) are skipped so you can keep large context notes elsewhere.
  • Global content from Agents.md is appended to every role, followed by the matching role file if present.
  • Existing system_prompt values from roles.toml remain first, so you can layer project guidance without losing base persona tuning.
  • Diagnostics are emitted through the standard logger (RUST_LOG=info ./target/release/tentacle) to help debug missing or malformed files.
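
Putting those rules together, the effective system prompt for a developer role is assembled in this order (illustrative layout, not literal output):

<system_prompt from roles.toml>       # base persona, always first
<contents of .tentacle/Agents.md>     # global project guidance
<contents of .tentacle/developer.md>  # role-specific instructions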

Example Agents.md:

# Project Context
This service uses Axum and PostgreSQL. Prefer async/await patterns and propagate errors with anyhow::Result.

Example developer.md:

# Developer Guidelines
- Add unit tests for new code paths.
- Run `cargo fmt` before returning patches.
- Prefer tracing instrumentation over println! debugging.
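
To wire these up, create the directory at the project root and add the files you need; this is ordinary file setup, shown here with a heredoc:

mkdir -p .tentacle
cat > .tentacle/Agents.md <<'EOF'
# Project Context
This service uses Axum and PostgreSQL. Prefer async/await patterns and propagate errors with anyhow::Result.
EOF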

Troubleshooting

Git Issues:

  • Ensure a clean working directory: git status
  • Initialize if needed: git init && git add . && git commit -m "initial"

API Failures:

  • Check environment variables are set correctly
  • Verify network connectivity to provider endpoints
  • For Ollama: ensure service is running and models are pulled
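
For a local Ollama setup, a quick connectivity check rules out the common failure modes (curl hits Ollama's standard /api/tags endpoint):

# Confirm the Ollama server is reachable and models are pulled
curl http://localhost:11434/api/tags   # returns JSON listing local models
ollama list                            # same information via the CLI

# Confirm cloud provider keys are exported in this shell
env | grep -E 'OPENAI|ANTHROPIC|GEMINI'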

No Output:

  • Check that at least one planner and one developer have enabled = true
  • Verify models exist (for Ollama: ollama list)
  • Look at progress panel for error details

Architecture

For detailed architecture documentation, see Outline.md.

Development

# Run in development mode
cargo run

# Run tests
cargo test

# Check code style
cargo fmt --check
cargo clippy

Contributing

See the issues/ directory for the roadmap and planned features.