Multi-model subagent tool for Claude Code — spawn parallel conversations with Gemini, GPT-4, o3, and fold results back.
Sidecar extends Claude Code with the ability to delegate tasks to other LLMs. Think of it as "fork & fold" for AI conversations:
- Fork: Spawn a sidecar with a different model (Gemini, GPT-4, o3, etc.)
- Work: The sidecar investigates independently (interactive or headless)
- Fold: Results summarize back into your Claude Code context
- Use the right model for the job: Gemini for large context, o3 for reasoning, GPT-4 for specific tasks
- Keep context clean: Deep explorations stay in the sidecar, only the summary returns
- Work in parallel: Background sidecars while you continue with Claude Code
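The fork/work/fold loop above maps onto ordinary shell job control. A minimal sketch with `sleep` and `echo` standing in for a real sidecar run (illustrative only, not sidecar's implementation):

```shell
tmp=$(mktemp)
( sleep 0.1; echo "summary: tests pass" > "$tmp" ) &  # fork: work runs in the background
pid=$!
echo "main conversation continues..."                 # work: the foreground stays unblocked
wait "$pid"                                           # fold: block until the result is ready
result=$(cat "$tmp")
echo "$result"
rm -f "$tmp"
```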
```bash
npm install -g claude-sidecar
```

Requirements:

- Node.js 18+
- OpenCode CLI (`npm install -g opencode-ai`) — the engine that powers sidecars
Choose one of these options:

Option A: OpenRouter (recommended for multi-model access)

```bash
# Interactive setup
npx opencode-ai
# In the OpenCode UI, type: /connect
# Select "OpenRouter" and paste your API key
```

Or create the auth file directly:

```bash
mkdir -p ~/.local/share/opencode
echo '{"openrouter": {"apiKey": "sk-or-v1-YOUR_KEY"}}' > ~/.local/share/opencode/auth.json
```

Option B: Direct API keys

```bash
export GEMINI_API_KEY=your-google-api-key      # For Google models
export OPENAI_API_KEY=your-openai-api-key      # For OpenAI models
export ANTHROPIC_API_KEY=your-anthropic-key    # For Anthropic models
```

```bash
# Interactive sidecar with Gemini (via OpenRouter)
sidecar start \
  --model openrouter/google/gemini-2.5-pro \
  --briefing "Debug the auth race condition in TokenManager.ts"

# Headless (autonomous) test generation with direct API key
sidecar start \
  --model google/gemini-2.5-flash \
  --briefing "Generate Jest tests for src/utils/" \
  --headless
```

The model name format determines which authentication is used:
| Access Method | Model Format | Example |
|---|---|---|
| OpenRouter | `openrouter/provider/model` | `openrouter/google/gemini-2.5-flash` |
| Direct Google API | `google/model` | `google/gemini-2.5-flash` |
| Direct OpenAI API | `openai/model` | `openai/gpt-4o` |
| Direct Anthropic API | `anthropic/model` | `anthropic/claude-sonnet-4` |
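The routing in the table above amounts to a prefix match on the model string. A hypothetical helper (a sketch of the idea, not sidecar's actual code) that maps a model name to its auth source:

```shell
auth_for_model() {
  case "$1" in
    openrouter/*) echo "OpenRouter API key" ;;
    google/*)     echo "GEMINI_API_KEY" ;;
    openai/*)     echo "OPENAI_API_KEY" ;;
    anthropic/*)  echo "ANTHROPIC_API_KEY" ;;
    *)            echo "unknown" ;;
  esac
}

auth_for_model openrouter/google/gemini-2.5-flash  # → OpenRouter API key
auth_for_model openai/o3                           # → OPENAI_API_KEY
```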
| Command | Description |
|---|---|
| `sidecar start` | Launch a new sidecar |
| `sidecar list` | Show previous sidecars |
| `sidecar resume <id>` | Reopen a previous sidecar |
| `sidecar continue <id>` | Start a new sidecar building on a previous one |
| `sidecar read <id>` | Output a sidecar's summary/conversation |
On install, a Skill is automatically added to `~/.claude/skills/sidecar/`. This teaches Claude Code:
- When to spawn sidecars
- How to write effective briefings
- How to pass session context
- How to act on sidecar results
Claude Code will automatically know how to use sidecars after installation.
```
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│   Claude Code   │────▶│   Sidecar CLI   │────▶│    OpenCode     │
│                 │     │                 │     │  (Gemini/GPT)   │
│  "Debug this"   │     │ • Parse context │     │                 │
│                 │     │ • Build prompt  │     │  [Interactive   │
│                 │◀────│ • Return summary│◀────│   or Headless]  │
│  [Has summary]  │     │                 │     │                 │
└─────────────────┘     └─────────────────┘     └─────────────────┘
```
1. Claude Code invokes `sidecar start` with a briefing
2. Sidecar CLI extracts context from Claude Code's conversation
3. Opens OpenCode with the specified model
4. User works interactively (or the headless run proceeds autonomously)
5. On "Fold", a summary is generated and returned via stdout
6. Claude Code receives the summary and can act on it
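Because the fold arrives on plain stdout, any wrapper script can capture it with ordinary command substitution. A sketch with `echo` standing in for the sidecar process (the summary text here is invented for illustration):

```shell
# Hypothetical: capture a folded summary exactly as a caller would capture
# sidecar's stdout. `echo` stands in for the real sidecar invocation.
summary=$(echo "Root cause: token refresh raced with the logout handler")
printf 'Folded %s chars back into context\n' "${#summary}"
```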
| Model | Name | Best For |
|---|---|---|
| Gemini 2.5 Pro | `openrouter/google/gemini-2.5-pro` | Large context |
| Gemini 2.5 Flash | `openrouter/google/gemini-2.5-flash` | Fast, cost-effective |
| GPT-4o | `openrouter/openai/gpt-4o` | General purpose |
| o3 | `openrouter/openai/o3` | Complex reasoning |
| Claude Sonnet 4 | `openrouter/anthropic/claude-sonnet-4` | Balanced |
| Model | Name | Required Env Var |
|---|---|---|
| Gemini 2.5 Pro | `google/gemini-2.5-pro` | `GEMINI_API_KEY` |
| Gemini 2.5 Flash | `google/gemini-2.5-flash` | `GEMINI_API_KEY` |
| GPT-4o | `openai/gpt-4o` | `OPENAI_API_KEY` |
| o3 | `openai/o3` | `OPENAI_API_KEY` |
| Claude Sonnet 4 | `anthropic/claude-sonnet-4` | `ANTHROPIC_API_KEY` |
- Interactive mode: GUI window, human-in-the-loop
- Headless mode: Autonomous execution with timeout
- Context passing: Automatically pulls from Claude Code conversation
- Session persistence: Resume or continue past sidecars
- Conflict detection: Warns when files change during async execution
- Drift awareness: Indicates when context may be stale
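Conflict detection of this kind is typically checksum-based. An illustrative sketch, not sidecar's actual implementation: snapshot file checksums before an async run, re-checksum afterwards, and warn on any difference.

```shell
workdir=$(mktemp -d)
echo "v1" > "$workdir/file.ts"

# Snapshot checksums before the background task starts
before=$(cd "$workdir" && find . -type f -exec cksum {} + | sort)

echo "v2" > "$workdir/file.ts"   # simulates an edit made during async execution

# Re-checksum after the task finishes and compare
after=$(cd "$workdir" && find . -type f -exec cksum {} + | sort)
if [ "$before" != "$after" ]; then
  echo "warning: files changed during sidecar execution"
fi
rm -rf "$workdir"
```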
See `SKILL.md` for complete usage instructions.
MIT