Feature Radar helps your AI coding agent discover, track, and prioritize what to build next.
Whether it's creative ideation, ecosystem scanning, user feedback, or cross-project research, it captures ideas from any source, evaluates them objectively, and maintains a living knowledge base that compounds over time.
Works with any AI agent that supports SKILL.md.
Say "feature radar" and your agent analyzes your project (language, architecture, key feature areas) and builds a structured tracking system at `.feature-radar/`. From there, every feature goes through a lifecycle:
```mermaid
flowchart TD
    subgraph Discovery
        S1["scan"] --> OPP[opportunities/]
        S2["ref"] --> REF[references/]
    end
    subgraph Evaluation
        FR["feature-radar"] -->|Phase 1-3| CLASSIFY{Classify}
        CLASSIFY -->|Open| OPP
        CLASSIFY -->|Done/Rejected| ARC[archive/]
        CLASSIFY -->|Pattern| SPEC[specs/]
        CLASSIFY -->|External| REF
        FR -->|Phase 5-6| RANK[Rank & Propose]
        RANK --> BUILD["Enter plan mode"]
    end
    subgraph Completion
        DONE["archive"] --> ARC
        DONE -->|extract learnings| SPEC
        DONE -->|derive opportunities| OPP
        DONE -->|update references| REF
        S3["learn"] --> SPEC
    end
    OPP --> FR
    BUILD --> DONE
```
Archiving is not the end; it's a checkpoint. Every shipped feature produces learnings, reveals new gaps, and opens new directions. The archive checklist enforces this so institutional knowledge compounds instead of evaporating.
The skills trigger automatically: just say "what should we build next" or "this feature is done" and the right workflow kicks in.
Every skill follows the same execution model, with deep understanding before action, structured checkpoints during execution, and verified completion:
```mermaid
flowchart TD
    A[Trigger phrase received] --> B[Deep Read]
    B --> B1[Read base.md thoroughly]
    B1 --> B2[Scan existing files]
    B2 --> B3[State understanding]
    B3 --> C{Understanding<br/>confirmed?}
    C -->|No| B
    C -->|Yes| D[Behavioral Directives<br/>loaded]
    D --> E[Execute Workflow Steps]
    E --> F{Important<br/>output?}
    F -->|Yes| G[Write file +<br/>annotation review]
    F -->|No| H[Conversational<br/>confirm]
    G --> I{User annotated?}
    I -->|Yes| J[Address notes]
    J --> K{Approved?}
    I -->|No / approved| L[Continue]
    K -->|No| J
    K -->|Yes| L
    H --> L
    L --> M{More steps?}
    M -->|Yes| E
    M -->|No| N[Completion Summary]
```
You can steer any skill's output by annotating files directly:
- The skill writes a file (e.g., `opportunities/07-streaming.md`)
- Open the file, add `> NOTE: your correction` anywhere
- Tell the agent "address my notes"
- The agent reads all `> NOTE:` lines, applies corrections, removes markers
- Repeat until satisfied
This is the fastest way to inject domain knowledge the agent doesn't have: architecture constraints, naming conventions, strategic decisions.
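As an illustration, an annotated opportunity file might look like the sketch below. The feature name comes from the example above; the `Impact`/`Effort` fields and note wording are hypothetical, not a format the skill prescribes:

```markdown
# 07 - Streaming Responses

Impact: High
Effort: Medium

> NOTE: Effort should be Low - we already have an SSE helper in the codebase.

## Rationale
Streaming keeps perceived latency low for long generations.

> NOTE: Rank this on user-reported latency complaints, not assumptions.
```

After saving your notes, say "address my notes"; the agent applies each correction and strips the `> NOTE:` markers from the file.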
With skillshare:

```shell
skillshare install runkids/feature-radar --into feature-radar
```

With Vercel Skills CLI:

```shell
npx skills add runkids/feature-radar
```

Or copy the skills to your agent's skill directory:

```shell
# Claude Code
cp -r skills/* ~/.claude/skills/

# Codex
cp -r skills/* ~/.codex/skills/
```

Pick individual skills if you don't need all of them:

```shell
cp -r skills/feature-radar ~/.claude/skills/
cp -r skills/feature-radar-archive ~/.claude/skills/
```

| Skill | Trigger | Output |
|---|---|---|
| feature-radar | "feature radar", "what should we build next" | Full 6-phase workflow → all directories + base.md |
| feature-radar:scan | "scan opportunities", "brainstorm ideas" | New entries → opportunities/ |
| feature-radar:archive | "archive feature", "this feature is done" | Move to archive/ + extraction checklist |
| feature-radar:learn | "extract learnings", "capture what we learned" | Patterns → specs/ |
| feature-radar:ref | "add reference", "interesting approach" | Observations → references/ |
The full workflow. Analyzes your project, creates .feature-radar/ with base.md (project dashboard), then runs 6 phases: scan, archive, organize, gap analysis, evaluate, propose. Ends by recommending what to build next. Starts with deep project analysis and confirms understanding before proceeding. Checkpoints after Phase 1, 3, and 5 let you steer mid-flow.
Discover new ideas from creative brainstorming, user pain points, ecosystem evolution, technical possibilities, or cross-project research. Deduplicates against existing tracking and evaluates each candidate on 6 criteria including value uplift and innovation potential. Deeply reads existing tracking state to avoid duplicates. After creating files, offers annotation review so you can refine Impact/Effort/Position.
Archive a shipped, rejected, or covered feature. Then runs the mandatory extraction checklist: extract learnings → specs, derive new opportunities, update references, update trends. Does NOT skip steps. After creating the archive file, offers annotation review before running the extraction checklist.
Capture reusable patterns, architectural decisions, and pitfalls from completed work. Names files by the pattern, not the feature that produced it. Confirms each finding with you before writing to specs/.
Record external observations and inspiration: ecosystem trends, creative approaches from other projects, research findings, user feedback. Cites source URLs and dates, assesses implications, suggests new opportunities when unmet needs or innovation angles are found. Confirms impact assessment with you before writing.
On first run, feature-radar creates:
```
.feature-radar/
├── base.md          - Project dashboard: context, feature inventory, strategic overview
├── archive/         - Shipped, rejected, or covered features
├── opportunities/   - Open features ranked by impact and effort
├── specs/           - Reusable patterns and architectural decisions
└── references/      - External inspiration, observations, and ecosystem analysis
```
base.md is the project dashboard, generated by analyzing your codebase and updated incrementally:
- Project Context: language, architecture, key feature areas, core philosophy
- Feature Inventory: what's built, where the code lives, docs coverage gaps
- Tracking Summary: counts across all categories
- Classification Rules: how features move between categories
- Archive Extraction Checklist: the mandatory checks that make knowledge compound
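The sections above might render as a skeleton like the following. This is an illustrative sketch only; the exact headings and field values are generated per-project by the skill, and the TypeScript/CLI example values here are invented:

```markdown
# Feature Radar - <Project Name>

## Project Context
Language: TypeScript · Architecture: CLI + plugins · Philosophy: small, composable commands

## Feature Inventory
| Feature | Code | Docs |
|---|---|---|
| config loader | src/config/ | covered |
| watch mode | src/watch/ | gap |

## Tracking Summary
Opportunities: 4 open · Archive: 12 · Specs: 6 · References: 9

## Classification Rules
Open → opportunities/ · Done/Rejected → archive/ · Pattern → specs/ · External → references/

## Archive Extraction Checklist
- [ ] Learnings extracted to specs/
- [ ] New opportunities derived
- [ ] References updated
- [ ] Trends updated
```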
- Compound knowledge: every completed feature feeds back into the system
- Value-driven: chase user value and innovation, not feature checklists
- Honest evaluation: evaluate fit with YOUR architecture and users, not someone else's roadmap
- Signal over noise: 1 issue with no comments = weak signal; multiple independent asks = strong
- Evidence over assumptions: rank by real demand and creative potential, not hypothetical value
Skills live in the skills/ directory. To contribute:
- Fork the repository
- Create a branch for your skill
- Add your skill under `skills/{skill-name}/SKILL.md`
- Submit a PR
MIT License; see the LICENSE file for details.