Optimize context flow with modular plans (95% fewer tokens), parallel workflows (81% cost reduction), and zero-transformation architecture. Contextune your workflows for peak performance!
Claude Code has 80+ plugins with hundreds of slash commands. You can't remember them all.
Before Contextune:
You: "I need to run tests"
Claude: "Sure! Running tests..."
[30 seconds later, writes custom test script]
After Contextune:
You: "I need to run tests"
Contextune: 🎯 Auto-executing /sc:test (85% confidence, keyword match, 0.02ms)
Claude: [Executes /sc:test automatically]
Complete context optimization with session duration tracking, usage monitoring, and smart tool routing
Contextune v0.9.0 introduces comprehensive context engineering features that maximize session duration and minimize costs.
Measure context preservation effectiveness:

```shell
# View session metrics
./scripts/view_session_metrics.sh
```

Example output:

```
Session: session_1730000000
Started: 2025-10-26 21:00:00
First Compact: 2025-10-26 21:28:00
Duration: 28.0 minutes
Status: ✅ Good context preservation
```

Thresholds:

- ⚠️ Short (<10 min): Needs optimization
- ✅ Good (10-30 min): Healthy usage
- 🎯 Excellent (30+ min): Excellent preservation
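As an illustration, the thresholds above can be expressed as a small classifier. This is a sketch only; `classify_session` is a hypothetical helper, not part of Contextune's API.

```python
def classify_session(duration_min: float) -> str:
    """Bucket time-to-first-compact using the README's thresholds."""
    if duration_min >= 30:
        return "excellent"  # 30+ min: excellent preservation
    if duration_min >= 10:
        return "good"       # 10-30 min: healthy usage
    return "short"          # <10 min: needs optimization

print(classify_session(28.0))  # the 28-minute session above -> good
```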
Automatic cost optimization based on quota consumption:
- At 90% weekly usage: Auto-switch to Haiku (87% savings)
- Parallel task limits: 2 concurrent at high usage
- Three-tier fallback: Headless → Estimation → Manual paste
Track usage:

```shell
/usage             # Claude Code's usage command
/contextune:usage  # Paste output to track in Contextune
```

Intelligent delegation of expensive operations:
- Read >1000 lines → Delegate to Haiku
- Complex Bash → Delegate to Haiku
- Fast operations → Keep on Sonnet

Cost savings: 77-87% per delegated operation
Context benefit: 3x longer sessions before compaction
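A minimal sketch of the routing rules above. The `pick_model` helper and its parameters are illustrative assumptions, not Contextune's real delegation API.

```python
def pick_model(tool: str, line_count: int = 0, complex_bash: bool = False) -> str:
    """Route an operation to a model per the delegation rules above (sketch)."""
    if tool == "Read" and line_count > 1000:
        return "haiku"   # large reads are cheap to delegate
    if tool == "Bash" and complex_bash:
        return "haiku"   # long-running shell work
    return "sonnet"      # fast operations stay on the main model

print(pick_model("Read", line_count=2500))  # -> haiku
print(pick_model("Grep"))                   # -> sonnet
```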
81% cost reduction + 2x speedup with three-tier intelligence
Contextune v0.3.0 introduces cost-optimized Haiku agents that dramatically reduce parallel workflow costs while improving performance.
Before (All Sonnet):
5 parallel tasks: $1.40 per workflow
1,200 workflows/year: $1,680/year
After (Haiku Agents):
5 parallel tasks: $0.27 per workflow (81% cheaper!)
1,200 workflows/year: $328/year
Annual savings: $1,356 💰
- Response time: 2x faster (Haiku 1-2s vs Sonnet 3-5s)
- Quality: Identical for execution tasks
- Scalability: Same 200K context window
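The headline savings follow directly from the per-workflow prices quoted above; a quick check of the arithmetic:

```python
sonnet = 1.40      # $ per 5-task workflow, all Sonnet
haiku = 0.27       # $ per 5-task workflow, Haiku agents
workflows = 1_200  # workflows per year

print(f"{1 - haiku / sonnet:.0%} cheaper")                 # -> 81% cheaper
print(f"${(sonnet - haiku) * workflows:,.0f}/year saved")  # -> $1,356/year saved
```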
parallel-task-executor - Feature implementation ($0.04 vs $0.27)
Autonomous development task execution:
- Creates GitHub issue and worktree
- Implements features
- Runs tests
- Pushes code and reports
Cost per task: $0.04 (85% savings!)
worktree-manager - Git worktree lifecycle ($0.008 vs $0.06)
Expert worktree management:
- Create/remove worktrees
- Diagnose lock file issues
- Bulk cleanup operations
- Health checks
Cost per operation: $0.008 (87% savings!)
issue-orchestrator - GitHub operations ($0.01 vs $0.08)
GitHub issue management:
- Create/update/close issues
- Label management
- Link to PRs
- Bulk operations
Cost per operation: $0.01 (87% savings!)
test-runner - Autonomous testing ($0.02 vs $0.15)
Multi-language test execution:
- Run tests (Python, JS, Rust, Go)
- Generate reports
- Create issues for failures
- Track coverage
Cost per run: $0.02 (87% savings!)
performance-analyzer - Workflow optimization ($0.015 vs $0.12)
Performance and cost analysis:
- Benchmark workflows
- Identify bottlenecks
- Calculate ROI
- Generate reports
Cost per analysis: $0.015 (87% savings!)
Tier 1: Skills (Sonnet) → Guidance & expertise (20% of work)
Tier 2: Orchestration (Sonnet) → Planning & coordination
Tier 3: Execution (Haiku) → Task execution (80% of work)
Result: 81% cost reduction, 2x performance, same quality!
Contextune now includes autonomous expert guidance through Skills. No commands to memorize - just ask questions naturally!
parallel-development-expert
You: "How can I work on multiple features faster?"
Claude: *Skill activates automatically*
"Let me analyze your project...
Found 3 independent tasks!
Sequential: 8 hours → Parallel: 3 hours (62% faster!)
Say 'work on them in parallel' and I'll handle everything!"
intent-recognition
You: "What can Contextune do?"
Claude: "Contextune makes Claude Code more natural!
🎯 Capabilities:
1. Parallel Development (30-70% faster)
2. Smart Intent Detection (zero commands to learn)
3. Expert Troubleshooting (autonomous help)
Try: 'work on auth and dashboard in parallel'"
git-worktree-master
You: "Can't remove worktree, says locked"
Claude: "Diagnosing... Found lock file from interrupted operation.
Safe fix: Remove lock + worktree
Risk: None (keeps your branch)
Proceed? ✅"
performance-optimizer
You: "My parallel workflow seems slow"
Claude: "Benchmarking...
Bottleneck: Sequential setup (107s overhead)
Fix: Parallel setup pattern
Impact: 2.3 min faster (23% improvement)
Optimize now?"
Learn more: Skills Documentation
- Keyword Matching (0.02ms) - 60% of queries
- Model2Vec Embeddings (0.2ms) - 30% of queries
- Semantic Router (50ms) - 10% of queries
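The tiered cascade above can be sketched as a first-match-wins loop. The matcher functions below are toy stand-ins for the real keyword, Model2Vec, and Semantic Router tiers.

```python
from typing import Callable, Optional

Matcher = Callable[[str], Optional[str]]

def cascade(prompt: str, tiers: list[tuple[str, Matcher]]) -> Optional[tuple[str, str]]:
    """Return (command, tier_name) from the first tier that matches."""
    for name, matcher in tiers:
        command = matcher(prompt)
        if command is not None:
            return command, name
    return None  # no tier matched: pass the prompt through untouched

# Toy matchers for illustration only:
keyword = lambda p: "/sc:test" if "run tests" in p else None
embeddings = lambda p: "/sc:analyze" if "quality" in p else None

print(cascade("run tests please", [("keyword", keyword), ("model2vec", embeddings)]))
# -> ('/sc:test', 'keyword')
```

Cheap tiers run first, so most queries never pay the latency of the expensive ones.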
Uses Claude Code headless mode with Haiku to provide intelligent command suggestions:
- Fast analysis: 1-2 seconds for comprehensive prompt evaluation
- Smart alternatives: Suggests better commands when initial match isn't optimal
- No API key needed: Uses your existing Claude Code authentication
- Context-aware: Understands your workflow and suggests command sequences
Example:
You: "can you help me research the best React state libraries"
Detected: /ctx:help (100% via fuzzy, 0.64ms)
💡 Better alternatives:
• /ctx:research - get fast answers using 3 parallel agents
• /ctx:plan - create parallel development plans
💬 Haiku suggests: Use '/ctx:research' to quickly investigate React
state libraries in parallel (2 min, ~$0.07). If you want a structured
development plan afterward, follow with '/ctx:plan'.
Understands natural variations and automatically executes the detected command:
- "analyze my code" β Auto-executes
/sc:analyze - "review the codebase" β Auto-executes
/sc:analyze - "check code quality" β Auto-executes
/sc:analyze - "audit for issues" β Auto-executes
/sc:analyze - "work on these in parallel" β Auto-executes
/ctx:execute
- P95 latency: <2ms (keyword path)
- Zero context overhead
- Lazy model loading
- Automatic caching
- Works out of the box
- Auto-discovers all installed plugins
- No API keys required (keyword + Model2Vec)
- Optional: Semantic Router for complex queries
Option 1: From Marketplace (Recommended)

```shell
# Add Contextune marketplace
/plugin marketplace add Shakes-tzd/contextune

# Install plugin
/plugin install contextune
```

Option 2: Direct from GitHub

```shell
# Install directly
/plugin install Shakes-tzd/contextune

# Or specify version
/plugin install Shakes-tzd/contextune@0.1.0
```

Option 3: Local Development

```shell
# Clone repository
git clone https://github.com/Shakes-tzd/contextune
cd contextune

# Install locally
/plugin install @local
```

NEW in v0.5.0: Run the configuration command for persistent visibility:
```shell
/ctx:configure
```

This will:

- ✅ Add Contextune section to `~/.claude/CLAUDE.md` (~150 tokens, loaded at session start)
- ✅ Add Contextune commands to your status bar (zero context, always visible)
- ✅ Validate plugin settings and skills
- ✅ Create backups before any changes
Benefits:

- Always visible: See `/research | /parallel:plan | /parallel:execute` in status bar
- Session awareness: Claude remembers Contextune at every session start
- Safe: Creates backups, asks permission, provides rollback instructions
Without configuration: Contextune still works via intent detection, but you won't see visual reminders.
Just type what you want in natural language:
```shell
# Instead of memorizing:
/sc:analyze --comprehensive

# Just type:
"can you analyze my code for issues?"

# Contextune auto-executes:
🎯 Auto-executing /sc:analyze (85% confidence, keyword match, 0.02ms)
```
Contextune detects these commands out of the box:
| Natural Language | Command | Confidence |
|---|---|---|
| "analyze the code" | /sc:analyze | 85% |
| "run tests" | /sc:test | 85% |
| "fix this bug" | /sc:troubleshoot | 85% |
| "implement feature" | /sc:implement | 85% |
| "explain this code" | /sc:explain | 85% |
| "optimize performance" | /sc:improve | 85% |
| "design architecture" | /sc:design | 85% |
| "commit changes" | /sc:git | 85% |
Expandable: Contextune auto-discovers commands from all your installed plugins!
Contextune includes a powerful parallel development system that lets Claude work on multiple independent tasks simultaneously using git worktrees.
- Automatic Subagent Spawning - Claude spawns multiple agents to work in parallel
- Git Worktrees - Isolated working directories for each task
- GitHub Integration - Automatic issue creation and tracking
- Zero Manual Coordination - Claude manages everything automatically
| Natural Language | Command | What It Does |
|---|---|---|
| "plan parallel development" | /ctx:plan | Document development plan for parallel execution |
| "work on these in parallel" | /ctx:execute | Execute plan in parallel using git worktrees |
| "check parallel status" | /ctx:status | Monitor progress across all parallel tasks |
| "cleanup parallel worktrees" | /ctx:cleanup | Clean up completed worktrees and branches |
You: "I need to implement authentication, dashboard, and analytics"
Claude: "π These tasks are independent. Would you like to work on them in parallel?"
You: "yes, parallelize this work"
Contextune: 🎯 /ctx:execute detected (92% confidence)
Claude:
"✅ Created plan: .parallel/plans/PLAN-20251014.md
✅ Created 3 GitHub issues
✅ Created 3 git worktrees
Spawning 3 parallel subagents...
Agent 1: Working on authentication (worktrees/task-123)
Agent 2: Working on dashboard (worktrees/task-124)
Agent 3: Working on analytics (worktrees/task-125)
All tasks complete in 2 hours (vs 4.5h sequential - 56% faster!)"
- Sequential: 4.5 hours (sum of all tasks)
- Parallel: 2 hours (longest task duration)
- Speed Up: ~56% faster
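The arithmetic above generalizes: sequential time is the sum of all tasks, while parallel wall-clock time is bounded by the longest task. A quick check with illustrative durations:

```python
tasks_h = [2.0, 1.5, 1.0]   # three independent tasks (illustrative durations)
sequential = sum(tasks_h)   # 4.5 h: tasks run one after another
parallel = max(tasks_h)     # 2.0 h: bounded by the longest task

print(f"{1 - parallel / sequential:.0%} faster")  # -> 56% faster
```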
Requirements:
- GitHub CLI (`gh`) installed and authenticated
- Git remote configured
- Clean working tree
```
User prompt: "analyze my code please"
        ↓
UserPromptSubmit Hook
        ↓
┌──────────────────────┐
│ 3-Tier Cascade       │
├──────────────────────┤
│ 1. Keyword (0.02ms)  │ ← 60% coverage
│ 2. Model2Vec (0.2ms) │ ← 30% coverage
│ 3. Semantic (50ms)   │ ← 10% coverage
└──────────────────────┘
        ↓
Command: /sc:analyze (85%)
        ↓
Hook modifies prompt to "/sc:analyze"
        ↓
Claude Code auto-executes the command
```
- 85%+: Keyword match (high precision, instant)
- 70-85%: Semantic match (good accuracy, fast)
- 50-70%: Fallback match (requires confirmation)
- <50%: No suggestion (pass through)
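As a sketch, the confidence bands above map to hook behavior roughly like this; the function name is illustrative, not Contextune's actual API.

```python
def action_for(confidence: float) -> str:
    """Map a match confidence to hook behavior per the bands above (sketch)."""
    if confidence >= 0.85:
        return "auto-execute"      # keyword match: high precision, instant
    if confidence >= 0.70:
        return "auto-execute"      # semantic match: good accuracy, fast
    if confidence >= 0.50:
        return "ask-confirmation"  # fallback match: confirm before running
    return "pass-through"          # no suggestion; prompt reaches Claude as-is

print(action_for(0.85))  # -> auto-execute
print(action_for(0.60))  # -> ask-confirmation
```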
NEW in v0.5.4: Real-time detection display in your status line!
Contextune now writes detection data that can be displayed in Claude Code's status line, giving you instant visual feedback without consuming context tokens.
Quick Setup (2 minutes):

1. Find the statusline script path:

   ```shell
   echo "$HOME/.claude/plugins/contextune/statusline.sh"
   ```

2. Add to your Claude Code statusline config (`~/.claude/settings.json`):

   ```json
   {
     "statusline": {
       "right": [
         {"type": "command", "command": "/Users/yourname/.claude/plugins/contextune/statusline.sh"}
       ]
     }
   }
   ```

3. Or use the automated setup:

   ```shell
   /ctx:configure
   ```

   This command guides you through the setup process.
What you'll see:
- 🎯 /sc:analyze (85% via keyword) - Command detected
- 🎯 Contextune: Ready - No active detection
- Detection updates in real-time as you type
Tip: Run /ctx:configure for guided setup with automatic path detection
How it works:
```
UserPromptSubmit Hook
    ↓
Detects intent (keyword/model2vec/semantic)
    ↓
Writes to .contextune/last_detection
    ↓
statusline.sh reads file
    ↓
Status line displays: 🎯 /sc:analyze (85% via keyword)
```
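The file-based handoff can be sketched in a few lines. Note that the JSON layout of `.contextune/last_detection` shown here is an assumption for illustration, not the documented format.

```python
import json
from pathlib import Path

DETECTION = Path(".contextune/last_detection")  # layout below is assumed

def write_detection(command: str, confidence: float, method: str) -> None:
    """Hook side: persist the latest detection outside the conversation."""
    DETECTION.parent.mkdir(parents=True, exist_ok=True)
    DETECTION.write_text(json.dumps(
        {"command": command, "confidence": confidence, "method": method}))

def statusline() -> str:
    """Status line side: render the last detection, or a ready message."""
    if not DETECTION.exists():
        return "Contextune: Ready"
    d = json.loads(DETECTION.read_text())
    return f"{d['command']} ({d['confidence']:.0%} via {d['method']})"

write_detection("/sc:analyze", 0.85, "keyword")
print(statusline())  # -> /sc:analyze (85% via keyword)
```

Because the detection lives in a file rather than the transcript, the display costs zero context tokens.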
Benefits:
- ✅ Zero context overhead (file-based, not in conversation)
- ✅ Real-time visibility of what Contextune detected
- ✅ See detection method and confidence at a glance
- ✅ Works alongside other status line modules
Requirements:

- Bash shell
- `jq` (optional, for pretty formatting)
- Python 3.10+
- UV package manager
- Claude Code
```shell
# Clone repository
git clone https://github.com/yourusername/contextune
cd contextune

# Install dependencies
uv sync

# Run tests
uv run pytest

# Test matchers individually
uv run lib/keyword_matcher.py
uv run lib/model2vec_matcher.py
uv run lib/semantic_router_matcher.py

# Test hook
echo '{"prompt":"analyze my code"}' | uv run hooks/user_prompt_submit.py
```

```shell
# Serve docs locally
uv run mkdocs serve
# Visit http://localhost:8000

# Build docs
uv run mkdocs build

# Deploy to GitHub Pages
uv run mkdocs gh-deploy
```

```shell
# Format code
uv run ruff format .

# Lint
uv run ruff check --fix .

# Type check
uv run mypy lib/

# Run all checks
uv run pytest && uv run ruff check . && uv run mypy lib/
```

Contextune works out of the box with zero configuration!
Edit `~/.claude/plugins/contextune/data/user_patterns.json`:

```json
{
  "enabled": true,
  "confidence_threshold": 0.7,
  "tiers": {
    "keyword": true,
    "model2vec": true,
    "semantic_router": false
  },
  "custom_mappings": {
    "make it pretty": "/sc:improve",
    "ship it": "/sc:git"
  }
}
```

Optional: For Semantic Router (Tier 3):

```shell
export COHERE_API_KEY="your-key"
# Or
export OPENAI_API_KEY="your-key"
```

Benchmarked on M1 MacBook Pro:
| Tier | Latency (P95) | Coverage | Dependencies |
|---|---|---|---|
| Keyword | 0.02ms | 60% | None |
| Model2Vec | 0.2ms | 30% | model2vec (8MB) |
| Semantic Router | 50ms | 10% | API key |
Total hook overhead: <2ms for 90% of queries
- 3-tier detection cascade
- Keyword matching
- Model2Vec embeddings
- Semantic Router integration
- Hook implementation
- Basic command mappings
- /ctx:intents command
- /ctx:stats command
- Parallel development workflow
- /ctx:plan command
- /ctx:execute command
- /ctx:status command
- /ctx:cleanup command
- Status line integration (v0.5.4)
- Auto-discovery of all plugin commands
- Learning mode (capture corrections)
- Custom pattern editor
- Multi-command suggestions
- Context-aware ranking
- Command chaining detection
- Team pattern sharing
- VS Code extension
We love contributions! Here's how to help:
Open an issue with:
- Clear description of the problem
- Steps to reproduce
- Expected vs actual behavior
- Your environment (OS, Claude Code version, plugin version)
Open an issue with:
- Use case description
- Proposed solution
- Why it would help others
- Fork the repository
- Create feature branch: `git checkout -b feature/amazing-feature`
- Make changes and add tests
- Ensure tests pass: `uv run pytest`
- Format code: `uv run ruff format .`
- Commit changes: `git commit -m 'feat: add amazing feature'`
- Push to branch: `git push origin feature/amazing-feature`
- Open Pull Request
Development guidelines:
- Follow existing code style
- Add tests for new features
- Update documentation
- Use conventional commits
- Ensure all checks pass
```shell
# Run all tests
uv run pytest

# Run with coverage
uv run pytest --cov=lib --cov-report=html

# Run specific test file
uv run pytest tests/test_keyword.py

# Run with verbose output
uv run pytest -v

# Test individual matchers (built-in tests)
uv run lib/keyword_matcher.py
uv run lib/model2vec_matcher.py
uv run lib/semantic_router_matcher.py
```

No! The hook adds <2ms latency for 90% of queries. You won't notice it.
Keyword and Model2Vec tiers work completely offline (90% coverage). Semantic Router tier requires an API key but is optional.
Yes! Edit data/user_patterns.json to add your own mappings.
Yes! Contextune auto-discovers commands from all installed plugins.
Everything runs locally except Semantic Router (optional). No data is collected.
- Check confidence threshold in config
- Try more specific language
- Add custom mappings for your phrases
```shell
# Install dependencies
uv sync
```

The model downloads automatically on first use (~8MB).
```shell
# Set API key
export COHERE_API_KEY="your-key"
# Or
export OPENAI_API_KEY="your-key"
```

Or disable in config: `"semantic_router": false`
MIT License - see LICENSE file for details.
- Built with Model2Vec by Minish Lab
- Uses Semantic Router by Aurelio Labs
- Inspired by Claude Code's plugin ecosystem
- Special thanks to all contributors
- Documentation: https://yourusername.github.io/contextune/
- GitHub: https://github.com/yourusername/contextune
- Issues: https://github.com/yourusername/contextune/issues
- Discussions: https://github.com/yourusername/contextune/discussions
- Claude Code Docs: https://docs.claude.com/en/docs/claude-code/plugins
- Website: https://contextune.com (coming soon)
- Read the docs
- Join discussions
- Report bugs
- Star the repo
Contextune: The command translator Claude Code needs.
Made with ❤️ by developers who forgot too many slash commands