A git-powered agentic coding orchestrator that coordinates multiple AI models to collaboratively plan, write, and evaluate code.
Prerequisites:

- Rust 1.70+ (`cargo --version`)
- An initialized git repository (`git init && git add . && git commit -m "initial"`)
- At least one AI provider configured (see Configuration)
Build and run:

```bash
cargo build --release
./target/release/tentacle
```

- Configure your agents in `roles.toml`:

```toml
[[roles]]
name = "Planner_A"
kind = "planner"
provider = "Local"
model = "llama3.1:8b"
enabled = true
[[roles]]
name = "Dev_A"
kind = "developer"
provider = "Local"
model = "llama3.1:8b"
enabled = true
[arbiter]
name = "Arbiter"
provider = "Local"
model = "llama3.1:8b"- Set environment variables for your chosen providers:
# For local models (Ollama)
export LOCAL_BACKEND=ollama
# For cloud providers (optional)
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...
export GEMINI_API_KEY=...
```

- Start Ollama (if using local models):

```bash
ollama serve
ollama pull llama3.1:8b
```

- Run Tentacle:

```bash
./target/release/tentacle
```

- Enter your coding task in the TUI prompt and press Enter.
Tentacle creates a git workflow where each developer agent works on a separate branch:
- Planning Phase: Multiple planner agents propose approaches
- Developing Phase: Developer agents implement solutions on individual branches
- Arbitrating Phase: The arbiter evaluates all solutions and decides the merge strategy
- Merging Phase: Selected branches are merged to produce the final integrated solution
The terminal UI shows real-time progress, agent status, and git operations.
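Because everything happens on ordinary branches, you can inspect a run with standard git tooling. A hypothetical example; the branch names below are illustrative, since this README does not specify Tentacle's actual naming scheme:

```bash
# Inspect the branches created during a run (names are hypothetical).
git branch
#   main
#   tentacle/dev-a    <- Dev_A's candidate implementation
#   tentacle/dev-b    <- a second developer agent's branch

# Review how the arbiter's chosen branches were merged.
git log --oneline --graph --all
```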
Supported providers:

- Local: Ollama or LM Studio-compatible servers
- OpenAI: GPT-4, GPT-3.5-turbo, etc.
- Claude: Claude-3.5-sonnet, haiku, etc.
- Gemini: Gemini Pro, Flash, etc.
Environment variables:

```bash
# Ollama (default: http://localhost:11434)
LOCAL_BACKEND=ollama
OLLAMA_HOST=http://localhost:11434
# OpenAI-compatible servers (LM Studio, etc.)
LOCAL_BACKEND=compat
LOCAL_COMPAT_BASE=http://localhost:1234/v1
LOCAL_COMPAT_KEY=optional-key
# Cloud providers
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=...
```

See `roles.toml` for full configuration options (an illustrative entry follows this list), including:
- Role types: `planner`, `developer`, `reviewer`
- Provider selection and model names
- Custom system prompts and temperature
- Enable/disable flags
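As a sketch of those options, here is a hypothetical entry for a reviewer role. The `system_prompt` and `temperature` field names are taken from this README, but their exact spelling and accepted values should be checked against the shipped `roles.toml`:

```toml
[[roles]]
name = "Reviewer_A"
kind = "reviewer"        # one of: planner, developer, reviewer
provider = "Local"       # or "OpenAI", "Claude", "Gemini" -- exact strings may differ
model = "llama3.1:8b"
enabled = false          # disabled roles are skipped entirely
system_prompt = "You are a meticulous code reviewer. Flag correctness issues first."
temperature = 0.2        # lower values give more deterministic output
```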
Tentacle looks for an optional `.tentacle/` directory in the project root at startup. When present, its contents are merged into every role's system prompt before API calls are issued.
```text
.tentacle/
├── Agents.md # Global guidance applied to all roles
├── planner.md # Planner-specific instructions
├── developer.md # Developer-specific instructions
├── reviewer.md # Reviewer-specific instructions (optional)
└── arbiter.md # Arbitrator-specific instructions
```
Guidelines:
- Files are treated as UTF-8 text; invalid UTF-8 is ignored with a warning.
- Oversized files (>1 MiB) are skipped so you can keep large context notes elsewhere.
- Global content from `Agents.md` is appended to every role, followed by the matching role file if present.
- Existing `system_prompt` values from `roles.toml` remain first, so you can layer project guidance without losing base persona tuning (see the layering sketch after this list).
- Diagnostics are emitted through the standard logger (`RUST_LOG=info ./target/release/tentacle`) to help debug missing or malformed files.
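Putting those rules together, the effective system prompt for a developer role is layered roughly like this (a sketch of the ordering described above, not literal output):

```text
<system_prompt from roles.toml>       # base persona, always first
<contents of .tentacle/Agents.md>     # global guidance, appended to every role
<contents of .tentacle/developer.md>  # matching role file, appended last
```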
Example `Agents.md`:

```markdown
# Project Context

This service uses Axum and PostgreSQL. Prefer async/await patterns and propagate errors with `anyhow::Result`.
```
Example `developer.md`:

```markdown
# Developer Guidelines

- Add unit tests for new code paths.
- Run `cargo fmt` before returning patches.
- Prefer tracing instrumentation over `println!` debugging.
```
Git Issues:
- Ensure a clean working directory: `git status`
- Initialize if needed: `git init && git add . && git commit -m "initial"`
API Failures:
- Check that environment variables are set correctly
- Verify network connectivity to provider endpoints
- For Ollama: ensure the service is running and the required models are pulled
No Output:
- Check that at least one planner and one developer are `enabled = true`
- Verify models exist (for Ollama: `ollama list`)
- Look at the progress panel for error details (or capture verbose logs as shown below)
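If the progress panel doesn't reveal the cause, run with verbose logging as noted above. The `RUST_LOG` filter syntax is the standard one used by Rust loggers; the stderr redirect is just a convenient assumption for capturing output behind the TUI:

```bash
# Capture verbose diagnostics while reproducing the problem.
RUST_LOG=debug ./target/release/tentacle 2> tentacle-debug.log
```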
For detailed architecture documentation, see `Outline.md`.
Development:

```bash
# Run in development mode
cargo run
# Run tests
cargo test
# Check code style
cargo fmt --check
cargo clippy
```

See the `issues/` directory for the roadmap and planned features.