
Adaptive Claude Agents

Status: v1.2.0 - Production Ready 🎉

Auto-generate project-specific Claude Code subagents that adapt to your development phase. Perfect for beginners - Start vibe coding in 5 minutes!


Japanese README (日本語版) | Documentation | Examples | Troubleshooting


What is this?

Adaptive Claude Agents automatically generates specialized Claude Code subagents for your project and adjusts code review rigor based on your development phase.

👶 Perfect for Beginners

Vibe Coding Made Easy:

  • ✅ Install in 30 seconds
  • ✅ Templates auto-generated for your project
  • ✅ Quick Start sections (50-100 lines) for immediate value
  • ✅ Expand for advanced patterns when ready
  • ✅ Subagents auto-activate (no manual delegation)

Learning Curve: 5 minutes to start, lifetime to master

Three Novel Features

1. 🔍 Auto-Detection & Subagent Generation

Analyzes your project and generates appropriate subagents:

Detected: Next.js 14 + TypeScript + Vitest
→ Generated: nextjs-tester

Detected: FastAPI + SQLAlchemy + pytest
→ Generated: api-developer, api-tester, sqlalchemy-specialist

Supported Frameworks (11): Next.js, React, Vue, FastAPI, Django, Flask, Vanilla PHP/Web, Python ML/CV, iOS Swift, Go, Flutter
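
For intuition, detection combines file-based checks with content analysis. A minimal sketch of the idea in Python (illustrative only; the real rules live in `detect_stack.py`, and the marker strings below are assumptions):

```python
import json
from pathlib import Path

def detect_stack(project_dir: str) -> list[str]:
    """Toy file-based stack detection (illustrative, not the actual logic)."""
    root = Path(project_dir)
    detected = []

    # JS/TS stacks: inspect package.json dependencies
    pkg = root / "package.json"
    if pkg.is_file():
        deps = json.loads(pkg.read_text()).get("dependencies", {})
        if "next" in deps:
            detected.append("Next.js")
        elif "react" in deps:
            detected.append("React")

    # Python stacks: inspect requirements.txt
    req = root / "requirements.txt"
    if req.is_file():
        reqs = req.read_text().lower()
        for marker, framework in [("fastapi", "FastAPI"), ("django", "Django"), ("flask", "Flask")]:
            if marker in reqs:
                detected.append(framework)

    return detected
```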

2. 📊 Phase-Adaptive Review ⭐ Industry First

Automatically adjusts review standards based on development phase:

| Phase | Review Rigor | Focus | Example Feedback |
|-------|--------------|-------|------------------|
| Prototype | Light (3/10) | "Does it work?" | "This works! Consider adding error handling later." |
| MVP | Moderate (6/10) | "Is it secure?" | "Add input validation here to prevent SQL injection." |
| Production | Strict (10/10) | "Is it perfect?" | "This needs comprehensive error handling, tests, and monitoring." |

No other AI coding tool (GitHub Copilot, Cursor, etc.) has this.
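
In effect, the detected phase acts as a knob on the reviewer's checklist. A minimal sketch of the mapping, using the rigor levels from the table above (the names and structure are illustrative, not the tool's internals):

```python
# Illustrative phase-to-review mapping based on the table above.
REVIEW_PROFILES = {
    "prototype": {"rigor": 3, "focus": "Does it work?"},
    "mvp": {"rigor": 6, "focus": "Is it secure?"},
    "production": {"rigor": 10, "focus": "Is it perfect?"},
}

def review_rigor(phase: str) -> int:
    """Return the 1-10 review rigor for a detected phase."""
    return REVIEW_PROFILES[phase.lower()]["rigor"]
```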

3. 💰 Token Optimization (60-70% Reduction) ⭐ NEW in v1.2.0

Automatically reduce Claude API costs by 60-70% using 2025 best practices:

  • 🔑 Markdown Shared Memory: Subagents save detailed reports to .claude/reports/, returning only summaries (50-60% savings; sketched after this list)
  • Context Compression: Pass file paths instead of contents (60-80% savings per delegation)
  • 📍 Just-in-Time Loading: Progressive 3-tier context loading (40-50% savings)
  • 🚨 Parallel Processing Limits: Prevent token spikes from excessive parallelization
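
The markdown shared-memory pattern boils down to "write details to disk, return a pointer". A minimal sketch, assuming the .claude/reports/ layout mentioned above (the function name is hypothetical):

```python
from pathlib import Path

def save_report(name: str, full_report: str, summary: str) -> str:
    """Persist a subagent's detailed findings and return only a short summary.

    Only the returned string re-enters the main conversation context;
    the full report stays on disk for later inspection.
    """
    reports = Path(".claude/reports")
    reports.mkdir(parents=True, exist_ok=True)
    path = reports / f"{name}.md"
    path.write_text(full_report)
    return f"{summary} (full report: {path})"
```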

Real-world example:

  • Before: 205,000 tokens/session → $0.80
  • After: 10,700 tokens/session → $0.05
  • Savings: 95% ($0.75 per session)

Quick start: 5-Minute Token Optimization Guide | Deep dive: Complete Token Optimization Guide


🚀 Quick Start

Installation

# One-command install
curl -fsSL https://raw.githubusercontent.com/SawanoLab/adaptive-claude-agents/main/install.sh | bash

Or manual install:

git clone https://github.com/SawanoLab/adaptive-claude-agents.git
cd adaptive-claude-agents
./install.sh

Updating

Already installed? Follow these 2 steps to get the latest AGGRESSIVE mode features:

Step 1: Update the Global Tool

# macOS
cd "$HOME/Library/Application Support/Claude/skills/adaptive-claude-agents"
./update.sh

# Linux/WSL
cd ~/.config/claude/skills/adaptive-claude-agents
./update.sh

What's updated:

  • Latest detection logic and templates
  • Enhanced AGGRESSIVE policy configuration
  • Bug fixes and improvements

Step 2: Update Each Project (IMPORTANT!)

For every project where you previously ran analyze_project.py, choose one of the update modes:

# Navigate to your project
cd /path/to/your/project

# Option 1: Update only existing files (RECOMMENDED - preserves customizations)
python3 "$HOME/Library/Application Support/Claude/skills/adaptive-claude-agents/skills/project-analyzer/analyze_project.py" . --update-only --auto

# Option 2: Add new templates, preserve existing (with backup)
python3 "$HOME/Library/Application Support/Claude/skills/adaptive-claude-agents/skills/project-analyzer/analyze_project.py" . --merge --auto

# Option 3: Complete regeneration (discards customizations, creates backup)
python3 "$HOME/Library/Application Support/Claude/skills/adaptive-claude-agents/skills/project-analyzer/analyze_project.py" . --force --auto

Update Mode Differences:

  • --update-only: Update only existing .claude/agents/*.md, no new files added (safest)
  • --merge: Add new templates, preserve existing files (creates backup)
  • --force: Overwrite everything (creates backup, customizations lost)

This will:

  1. ✅ Regenerate .claude/agents/SUBAGENT_GUIDE.md with latest framework-specific workflows
  2. ✅ Add AGGRESSIVE policy to your project's CLAUDE.md (if not already present)
  3. ✅ Update all subagent templates with latest best practices
  4. ✅ Protect your customized subagents (when using --update-only or --merge)

Repeat Step 2 for all active projects to enable proactive subagent usage everywhere!

Usage

In Claude Code, simply ask:

> "Analyze my project and generate appropriate subagents"

Or use directly:

# macOS
python3 "$HOME/Library/Application Support/Claude/skills/adaptive-claude-agents/skills/project-analyzer/analyze_project.py" .

# Linux/WSL
python3 ~/.config/claude/skills/adaptive-claude-agents/skills/project-analyzer/analyze_project.py .

Full guide: Quick Start (5 minutes)


🎯 When to Use Subagents (Efficiency Guide)

Philosophy: If you installed this tool, you want proactive subagent usage. Trust the automation!

✅ MUST Use Subagents

| Scenario | Example | Subagent | Time Saved |
|----------|---------|----------|------------|
| 3+ files with similar patterns | "Apply blur fix to assessment.js, soap.js, nursing_plan.js" | general-purpose | 30-60 min |
| Codebase-wide searches | "Find all uses of version_number" | Explore ("very thorough") | 60-90 min |
| E2E testing workflows | "Test login → API call → DB validation" | general-purpose + framework tester | 45+ min |
| Parallel independent tasks | "Update .gitignore + Refactor UI components" | Multiple general-purpose | 30+ min |

❌ Skip Subagents

  • Small single-file edits (< 10 lines)
  • Simple 1-2 file searches where location is known
  • Token-constrained environments (rare)

💡 Pro Tips

  1. Default to subagents for 3+ files - 20k token overhead is worth it
  2. Use Explore agent liberally - Better than manual grep/glob
  3. Parallelize independent tasks - Single message, multiple Task tool calls
  4. Trust auto-triggers - Keywords like "テスト" (Japanese for "test"), "test", and "review" activate subagents

📊 Cost-Benefit Analysis

| Task Type | Direct Cost | Subagent Cost | Time Saved | Recommendation |
|-----------|-------------|---------------|------------|----------------|
| 1 file edit | 5k tokens | 25k tokens | 0 min | ❌ Direct |
| 3-4 files | 15k tokens | 35k tokens | 30 min | ✅ Subagent |
| 5+ files | 30k tokens | 50k tokens | 60 min | ✅✅ Subagent |
| Codebase search | 40k tokens | 60k tokens | 90 min | ✅✅✅ Explore |

Target: 20-30% subagent usage rate in complex multi-file projects.

See EXAMPLES.md for detailed walkthroughs.


📚 Documentation

| Document | Description |
|----------|-------------|
| Quick Start | 5-minute getting started guide |
| Subagent Update Guide | How to safely update project subagents |
| Examples | 5 real-world examples with full output |
| Troubleshooting | Common issues and solutions |
| Contributing | How to contribute templates |
| Changelog | Version history |

🎯 How It Works

1. Project Analysis

$ python3 skills/project-analyzer/detect_stack.py .

Detected: Next.js 14 + TypeScript
Confidence: 98%
Tools: Vitest, Testing Library, Tailwind CSS

2. Phase Detection

$ python3 skills/project-analyzer/detect_phase.py .

Phase: MVP
Confidence: 72%
Review Rigor: 6/10

Signals:
  • Basic tests: 100% ✅
  • CI config: 100% ✅
  • Moderate commits (20-100): 100% ✅
  • Multiple contributors (2-5): 100% ✅
  • Environment config: 100% ✅
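
One way to read the confidence figure: each candidate phase accumulates a score from its signals, and the winning phase's share of the total becomes the confidence. A sketch of that idea (a hypothetical formula; detect_phase.py defines the real one):

```python
def pick_phase(phase_scores: dict[str, float]) -> tuple[str, float]:
    """Hypothetical scoring: confidence is the best phase's share of all scores.

    Example: {"prototype": 0.10, "mvp": 0.72, "production": 0.18}
    -> ("mvp", 0.72), matching the 72% shown above.
    """
    total = sum(phase_scores.values())
    best = max(phase_scores, key=phase_scores.get)
    return best, phase_scores[best] / total
```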

3. Subagent Generation

Generated subagents:
  ✓ .claude/agents/nextjs-tester.md

Variables substituted:
  {{FRAMEWORK}} → Next.js
  {{LANGUAGE}} → TypeScript
  {{VERSION}} → 14.2.0
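
Substitution itself is plain string templating. A minimal sketch (the template sentence is made up; the variables are the ones shown above):

```python
def render_template(template: str, variables: dict[str, str]) -> str:
    """Replace {{NAME}} placeholders with project-specific values."""
    for name, value in variables.items():
        template = template.replace("{{" + name + "}}", value)
    return template

print(render_template(
    "You test {{FRAMEWORK}} {{VERSION}} apps written in {{LANGUAGE}}.",
    {"FRAMEWORK": "Next.js", "LANGUAGE": "TypeScript", "VERSION": "14.2.0"},
))
# -> You test Next.js 14.2.0 apps written in TypeScript.
```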

4. Adaptive Review

Review standards automatically adjust:

  • Prototype: Encourages rapid iteration, defers quality checks
  • MVP: Balances functionality and quality
  • Production: Enforces strict standards (80%+ test coverage, security audit, etc.)

🌟 What Makes It Unique

Compared to Other Tools

| Feature | Adaptive Claude Agents | GitHub Copilot | Cursor |
|---------|------------------------|----------------|--------|
| Auto-detect tech stack | ✅ 11 frameworks | ❌ | ❌ |
| Generate specialized agents | ✅ 15 templates | ❌ | ❌ |
| Phase-adaptive review | ✅ Industry first | ❌ | ❌ |
| Works across all projects | ✅ Global Skills | ❌ | ❌ |
| Open source | ✅ MIT | ❌ | ❌ |

📦 Supported Frameworks

| Framework | Detection Confidence | Templates | Tested |
|-----------|----------------------|-----------|--------|
| Next.js | 100% | nextjs-tester | ✅ |
| Vanilla PHP/Web | 100% | php-developer, playwright-tester, vanilla-js-developer, mysql-specialist | ✅ |
| Python ML/CV | 100% | python-ml-developer, cv-specialist | ✅ |
| Vue | 90% | (Next.js templates) | ✅ |
| Go | 85% | go-developer, go-reviewer, concurrency-checker | ✅ |
| Flutter | 80% | flutter-developer, widget-reviewer | ✅ |
| FastAPI | 80% | api-developer, api-tester, sqlalchemy-specialist | ✅ |
| React | 80% | (Next.js templates) | ✅ |
| Django | 80% | (FastAPI templates) | ✅ |
| iOS Swift | 80% | swift-developer | ✅ |
| Flask | 70% | (FastAPI templates) | ✅ |

Total: 11/11 frameworks tested (100%), 15 specialized templates (~864KB with comprehensive troubleshooting)

Template Structure:

  • Quick Start section (50-100 lines) for beginners to get immediate value
  • Advanced patterns for deep expertise
  • Comprehensive troubleshooting guides

Legend: ✅ = Validated in Week 2 testing

Want to add your framework? See Template Request


🛠️ Development

Project Structure

adaptive-claude-agents/
├── skills/
│   ├── project-analyzer/     # Tech stack detection
│   └── adaptive-review/      # Phase detection
├── templates/                 # Subagent templates (15)
│   ├── nextjs/
│   ├── fastapi/
│   ├── vanilla-php-web/
│   ├── python-ml/
│   └── ios-swift/
├── docs/                      # User documentation
└── install.sh                 # Installation script

Tech Stack

  • Language: Python 3.9+
  • Detection: File-based + content analysis
  • Templates: Markdown with variable substitution
  • Integration: Claude Code Skills

🧪 Testing & Development

Running Tests

This project uses pytest with comprehensive test coverage:

# Install development dependencies
pip install -r requirements-dev.txt

# Run all tests
pytest

# Run specific test categories
pytest -m "not slow"              # Skip slow tests
pytest -m benchmark               # Run performance benchmarks only
pytest -m integration             # Run integration tests only

# Run with coverage report
pytest --cov=skills --cov-report=html

# Run tests in parallel (faster)
pytest -n auto

Test Structure

tests/
├── README.md              # Comprehensive testing docs
├── conftest.py            # Fixtures for 11 frameworks
├── test_detection.py      # Framework detection tests
├── test_generation.py     # Subagent generation tests
├── test_integration.py    # End-to-end workflow tests
└── test_performance.py    # Performance benchmarks

Quality Metrics

  • Coverage Target: 85%+ overall, 90%+ for detection logic
  • Performance Targets (see the sketch after this list):
    • Detection: < 500ms per framework
    • Generation: < 2s full workflow
    • Memory: < 100MB peak usage
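
As an illustration, the detection-time target could be checked with a test along these lines (a hypothetical test: the import path and entry point are assumptions, and the real benchmarks live in test_performance.py):

```python
import json
import time

def test_detection_within_target(tmp_path):
    """Hypothetical check that detection meets the <500ms target."""
    (tmp_path / "package.json").write_text(
        json.dumps({"dependencies": {"next": "14.2.0"}})
    )

    from skills.project_analyzer import detect_stack  # assumed import path
    start = time.perf_counter()
    detect_stack.main(str(tmp_path))  # assumed entry point
    elapsed = time.perf_counter() - start

    assert elapsed < 0.5, f"detection took {elapsed:.3f}s (target: <0.5s)"
```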

Code Quality Tools

# Format code
black skills/ tests/

# Lint code
ruff check skills/ tests/

# Type check
mypy skills/

See tests/README.md for comprehensive testing documentation.


🤝 Contributing

We welcome contributions! See CONTRIBUTING.md for:

  • Adding new framework templates
  • Improving detection accuracy
  • Enhancing documentation
  • Reporting bugs



📄 License

MIT License - see LICENSE for details.

Attribution: If you use this project, a link back to this repository is appreciated.


🙏 Acknowledgments

Developed at SawanoLab, Aichi Institute of Technology.


Special Thanks:

  • Anthropic for Claude Code and the Skills framework
  • Alpha testers from SawanoLab
  • All contributors to this project

📬 Contact


Made with ❤️ by SawanoLab

⭐ Star us on GitHub if you find this useful!
