Multi-agent orchestration made simple. Drop in agents, watch the magic happen.
pip install agentrylab
# Comedy gold
agentrylab run standup_club.yaml --objective "remote work" --max-iters 4
# Real debates with evidence
agentrylab run debates.yaml --objective "Should we colonize Mars?" --max-iters 4
# Facebook Marketplace deals finder
agentrylab run marketplace_deals.yaml --objective "MacBook Pro deals"
5 killer presets that actually work:
| Preset | What It Does | Cool Factor |
|---|---|---|
| Stand-Up Club | Two comedians + MC | Comedy gold, pure entertainment |
| Debates | Pro/con + evidence search | Real web research, actual citations |
| Research | Scientists collaborate | Academic rigor meets AI |
| Research Assistant | Interactive research chat | Human-in-the-loop web research |
| Marketplace Deals | Facebook Marketplace finder | Real listings, real URLs, real deals |
- Agents: Roles that speak (comedian, scientist, debater; no limits!)
- Tools: Real integrations (DuckDuckGo search, Facebook Marketplace, Wolfram Alpha)
- Providers: LLM backends (OpenAI, Ollama)
- Schedulers: Who talks when (round-robin, every-N)
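To make those four concepts concrete, here is how they fit together in a preset, sketched as a plain Python dict. The field names below are illustrative assumptions, not the real schema; see the Configuration docs for the actual YAML preset format.

```python
# Illustrative only: field names are assumptions, not the real preset schema.
illustrative_preset = {
    "providers": [  # LLM backends agents can use (OpenAI, Ollama, ...)
        {"id": "local-llama", "backend": "ollama", "model": "llama3"},
    ],
    "tools": [  # real integrations agents may call
        {"id": "web-search", "kind": "duckduckgo"},
    ],
    "agents": [  # roles that speak in the conversation
        {"id": "pro", "role": "debater", "provider": "local-llama", "tools": ["web-search"]},
        {"id": "con", "role": "debater", "provider": "local-llama", "tools": ["web-search"]},
    ],
    "scheduler": {"kind": "round-robin"},  # who talks when
}
```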
# Install
pip install agentrylab
# Optional: Local models with Ollama
curl -fsSL https://ollama.ai/install.sh | sh
ollama pull llama3
# Optional: API keys in .env
echo "OPENAI_API_KEY=sk-..." >> .env
echo "APIFY_API_TOKEN=apify_..." >> .env
agentrylab run standup_club.yaml --objective "AI taking over the world" --max-iters 6
agentrylab run research.yaml --objective "quantum biology breakthroughs"
# Start conversation
agentrylab run research_assistant.yaml --objective "latest AI developments"
# Jump in anytime
agentrylab say research_assistant.yaml demo "What about quantum computing?"
agentrylab run research_assistant.yaml --thread-id demo --resume --max-iters 1
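To drive the same human-in-the-loop flow from a script instead of a terminal, one option is simply to shell out to the CLI commands above. A minimal sketch, assuming `agentrylab` is on your PATH and reusing the `demo` thread id from the example:

```python
import subprocess

def cli(*args: str) -> None:
    """Invoke the agentrylab CLI and raise if it exits non-zero."""
    subprocess.run(["agentrylab", *args], check=True)

# Start the conversation (same command as above).
cli("run", "research_assistant.yaml", "--objective", "latest AI developments")

# Jump in with a human message on thread "demo", then resume for one iteration.
cli("say", "research_assistant.yaml", "demo", "What about quantum computing?")
cli("run", "research_assistant.yaml", "--thread-id", "demo", "--resume", "--max-iters", "1")
```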
agentrylab run marketplace_deals.yaml --objective "iPhone 15 Pro deals in NYC"
# Non-interactive run with structured inputs (user_inputs)
agentrylab run marketplace_deals.yaml \
--params '{"query":"MacBook Pro 14 M3","location":"Tel Aviv","min_price":5000,"max_price":12000}'
- Presets may declare a `user_inputs` section. If required inputs are missing when starting a conversation via the Telegram adapter, the conversation enters `COLLECTING` status until inputs are provided.
- Adapter helpers:
  - `provide_user_param(conversation_id, key, value)`: supply a single input; returns the keys still missing
  - `finalize_params_and_start(conversation_id)`: substitute the collected values, initialize the lab, and transition to `ACTIVE`

This enables progressive, chat-like forms for complex scenarios (e.g., location, radius, min/max price) while still supporting one-shot runs with `--params`.
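Here is a minimal sketch of how those helpers could be driven from code. Only `provide_user_param(...)` and `finalize_params_and_start(...)` come from this section; the `adapter` object and how it is obtained are assumptions.

```python
# Sketch only: driving the adapter helpers described above.
# The `adapter` object is an assumption; only the two helper names are documented here.

def collect_inputs_and_start(adapter, conversation_id: str, answers: dict) -> None:
    """Feed user inputs one at a time, then start the lab once nothing is missing."""
    remaining = None
    for key, value in answers.items():
        # Supply a single input; the helper returns the keys still missing.
        remaining = adapter.provide_user_param(conversation_id, key, value)
    if remaining:
        raise ValueError(f"Still missing required inputs: {remaining}")
    # All inputs collected: substitute values, initialize the lab, transition to ACTIVE.
    adapter.finalize_params_and_start(conversation_id)
```

Called with, say, `{"query": "MacBook Pro 14 M3", "location": "Tel Aviv", "min_price": 5000, "max_price": 12000}`, this mirrors the `--params` example above but collects the values one message at a time.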
from agentrylab import init
# Start a comedy show
lab = init("standup_club.yaml", experiment_id="comedy-night")
lab.run(rounds=6)
# Check out the show
for msg in lab.state.history:
print(f"[{msg['role']}]: {msg['content']}")
- Real-time streaming: Watch agents work live
- Resume anywhere: Pick up where you left off
- Tool budgets: Prevent runaway API costs
- Human-in-the-loop: Jump into conversations anytime
- Persistence: Everything saved to SQLite + JSONL
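Because transcripts are persisted as JSONL, you can inspect a finished run with nothing but the standard library. The path below is a placeholder (see the Persistence docs for the actual layout), and the `role`/`content` field names are assumptions, so the sketch falls back gracefully:

```python
import json
from pathlib import Path

# Placeholder path: check the Persistence docs for where your runs are written.
transcript = Path("outputs/comedy-night/transcript.jsonl")

# Each non-empty line of a JSONL file is one JSON record (one persisted event).
for line in transcript.read_text(encoding="utf-8").splitlines():
    if not line.strip():
        continue
    record = json.loads(line)
    # Field names are assumptions; fall back to printing the whole record.
    print(record.get("role", "?"), "->", str(record.get("content", record))[:120])
```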
- CLI Reference - All commands
- Configuration - YAML preset format
- Architecture - How it works
- Persistence - Data storage format
We welcome contributions! See CONTRIBUTING.md for details.
Quick wins:
- New presets (comedy, debates, research, etc.)
- New tools (APIs, databases, etc.)
- New providers (Claude, Gemini, etc.)
MIT - Go build something amazing!
Made with ❤️ for the AI community. Because single agents are boring. 🤖