AI-powered codebase mapping with parallel subagents. Produces architecture documentation, health analysis, and actionable recommendations.
The problem: AI agents can't read large codebases without hitting context limits or burning tokens re-reading files every session.
The solution: Recon splits your codebase across parallel subagents, each analyzing a portion simultaneously. The results merge into a single persistent map.
The payoff:
- Persistent map. Agents reference the map instead of re-reading source files every session.
- Incremental updates. Re-runs detect changes and only re-analyze modified files.
- Works at any scale. 10 files or 10,000, the approach is the same.
- Actionable output. Not just documentation, but health signals and priorities.
One /recon command gives your AI agents permanent context about your codebase.
| Platform | Status | Installation |
|---|---|---|
| Claude Code | ✅ Stable | Get Started |
| Cursor | ⚠️ Known Issue | Get Started |
| Codex | 🔜 Coming Soon | — |
Install:

```
/plugin marketplace add EfrainTorres/recon
/plugin install recon
```
Use:

`/recon`
Or say "recon my project" and it triggers automatically.
For a full re-scan: `/recon --force`
Requirements: `tiktoken` (auto-installed with `uv run`, or `pip install tiktoken`)
📖 Full Claude Code documentation
Install:

```
# Copy agents to your project
mkdir -p .cursor/agents
curl -o .cursor/agents/recon.md https://raw.githubusercontent.com/EfrainTorres/recon/main/cursor/agents/recon.md
curl -o .cursor/agents/recon-analyzer.md https://raw.githubusercontent.com/EfrainTorres/recon/main/cursor/agents/recon-analyzer.md

# Copy scanner
mkdir -p scripts
curl -o scripts/scan-codebase.py https://raw.githubusercontent.com/EfrainTorres/recon/main/plugins/recon/skills/recon/scripts/scan-codebase.py
chmod +x scripts/scan-codebase.py
```

Use:

`/recon`
Or say "recon my project".
Requirements: Cursor 2.4+, Python 3.9+, `tiktoken`
⚠️ Known Issue: Cursor 2.4.x/2.5 has a bug preventing subagent spawning. Parallel analysis won't work until Cursor releases a fix. Use Claude Code for full functionality.
Status: Waiting on OpenAI
Native subagent support in Codex is still under development and hasn't been released yet. A community PR implementing multi-agent orchestration was closed in Oct 2025 as OpenAI aligns contributions with internal roadmaps.
The current workaround requires external orchestration via the Agents SDK + MCP, which adds setup complexity that defeats the point of a simple `/recon` command.
We'll add Codex support once native subagents land. Track progress: #2604
Recon orchestrates multiple subagents to analyze your codebase in parallel, then synthesizes their findings into:
- `docs/RECON_REPORT.md` with comprehensive codebase documentation:
  - Architecture map with file purposes, dependencies, data flows
  - Entrypoints (where execution begins)
  - Config surface (all configuration files by category)
  - Environment surface and env var usage (v2.1)
  - API surface with HTTP endpoints, CLI commands (v2.1)
  - Test coverage with colocated test detection (v2.1)
  - Dependency graph: high-impact files, circular dependencies, orphan candidates (v2.2)
  - Health summary: hotspots, staleness, duplication, complexity
  - Suggested first actions (top 5 priorities)
- Updates `CLAUDE.md` with a summary pointing to the map
1. Scan: Runs the v2 scanner for file tree, token counts, git stats, entrypoints, duplicates
2. Plan: Splits work across subagents based on token budgets (~150k each; see the sketch after this list)
3. Analyze: Spawns subagents in parallel with enhanced observation prompts
4. Synthesize: Combines subagent reports + scanner metadata into documentation
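To make the Plan step concrete, here is a minimal sketch of greedy token-budget packing. `plan_batches` is a hypothetical helper and `cl100k_base` an assumed encoding; the real scanner may split work differently.

```python
import tiktoken

def plan_batches(files: dict[str, str], budget: int = 150_000) -> list[list[str]]:
    """Greedily pack files into batches that each fit one subagent's token budget."""
    enc = tiktoken.get_encoding("cl100k_base")  # assumed encoding
    batches: list[list[str]] = []
    current, used = [], 0
    for path, text in files.items():
        tokens = len(enc.encode(text))
        if current and used + tokens > budget:
            batches.append(current)   # this batch is full; start a new one
            current, used = [], 0
        current.append(path)
        used += tokens
    if current:
        batches.append(current)
    return batches
```

Each resulting batch is small enough for one subagent to read in full, which is what makes the parallel Analyze step possible.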
Dependency Graph Intelligence (v2.2):
- High Impact Files: files with 5+ dependents flagged for careful changes
- Circular Dependencies: import cycles detected and reported with cycle paths (sketched after this list)
- Orphan Candidates: unused files identified (cross-referenced with entrypoints, tests, scripts)
- Structured JSON extraction from subagents enables cross-module visibility
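Cycle detection over an import graph can be as simple as a depth-first search that watches for back edges. A minimal sketch, assuming the subagents' structured JSON has already been merged into a file-to-imports mapping (`find_cycles` is an illustrative name, not Recon's API):

```python
def find_cycles(imports: dict[str, list[str]]) -> list[list[str]]:
    """DFS over the import graph; returns each cycle as a path of files."""
    cycles: list[list[str]] = []
    state: dict[str, int] = {}  # 1 = in progress, 2 = finished

    def dfs(node: str, path: list[str]) -> None:
        state[node] = 1
        path.append(node)
        for dep in imports.get(node, []):
            if state.get(dep) == 1:  # back edge: a cycle closes here
                cycles.append(path[path.index(dep):] + [dep])
            elif dep not in state:
                dfs(dep, path)
        path.pop()
        state[node] = 2

    for node in imports:
        if node not in state:
            dfs(node, [])
    return cycles

# find_cycles({"a.py": ["b.py"], "b.py": ["a.py"]})
# -> [["a.py", "b.py", "a.py"]]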
Scanner Intelligence:
- Git-powered analysis: churn hotspots, staleness detection, co-change coupling
- Entrypoint detection (package.json, pyproject.toml, Cargo.toml, Dockerfile)
- Config surface listing by category
- Exact duplicate detection via content hashing (see the sketch after this list)
- Generated code detection (excluded from health signals)
- TODO/FIXME counting and distribution
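Content-hash duplicate detection is straightforward. A minimal sketch of the idea (the real scanner lives in `scripts/scan-codebase.py` and presumably also skips ignored and generated paths):

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def exact_duplicates(root: str = ".") -> list[list[Path]]:
    """Group files by a hash of their bytes; any group larger than one is a duplicate set."""
    by_digest: dict[str, list[Path]] = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            by_digest[hashlib.sha256(path.read_bytes()).hexdigest()].append(path)
    return [paths for paths in by_digest.values() if len(paths) > 1]
```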
Enhanced Health Observations (v2.1):
- Environment Surface: all environment variables and config dependencies (see the sketch after this list)
- API Surface: HTTP endpoints, CLI commands, public exports
- Test Coverage: colocated test file detection
- Dependency Flow: import/export relationships between files
- Unused code candidates with evidence-based confidence levels
- Complexity issues with specific examples
- Inconsistency detection within modules
- "Skip if n/a" approach (no empty sections, no wasted tokens)
Security:
- Never outputs credential values (API keys, tokens, passwords)
- Reports key names only (e.g., "JWT_SECRET used in auth.ts")
Actionable Output:
- Health Summary section with prioritized findings
- Suggested First Actions (top 5 things to address)
- Knowledge risk identification (single-author critical files)
Heuristics with honesty beat precision with complexity.
- Language-agnostic: works on any language, any stack, any size
- Scanner measures, LLMs judge
- Output is curated prose, not data dumps
- Single markdown file = predictable token cost
Recon is based on Cartographer by @kingbootoshi. The v2 scanner and health intelligence features build on his original vision for AI-powered codebase mapping.
AGPL-3.0