
WIGGUM

Plug into any codebase. Generate specs. Ship features while you sleep.


Quick Start · How It Works · Website · Blog · Pricing · Issues

(Demo video: wiggum-demo1-gh-compressed.mp4)


What is Wiggum?

Wiggum is an AI agent that plugs into any codebase and makes it ready for autonomous feature development — no configuration, no boilerplate.

It works in two phases. First, Wiggum itself is the agent: it scans your project, detects your stack, and runs an AI-guided interview to produce detailed specs, prompts, and scripts — all tailored to your codebase. Then it delegates the actual coding to Claude Code or any CLI-based coding agent, running an autonomous implement → test → fix loop until the feature ships.

Plug & play. Point it at a repo. It figures out the rest.

         Wiggum (agent)                    Coding Agent
  ┌────────────────────────────┐    ┌────────────────────┐
  │                            │    │                    │
  │  Scan ──▶ Interview ──▶ Spec ──▶  Run loops           │
  │  detect      AI-guided   .ralph/   implement         │
  │  80+ tech    questions   specs     test + fix        │
  │  plug&play   prompts     guides    until done        │
  │                            │    │                    │
  └────────────────────────────┘    └────────────────────┘
       runs in your terminal          Claude Code / any agent

🚀 Quick Start

npm install -g wiggum-cli

Then, in your project:

wiggum init                  # Scan project, configure AI provider
wiggum new user-auth         # AI interview → feature spec
wiggum run user-auth         # Autonomous coding loop

Or skip the global install:

npx wiggum-cli init

⚡ Features

🔍 Smart Detection — Auto-detects 80+ technologies: frameworks, databases, ORMs, testing tools, deployment targets, MCP servers, and more.

🎙️ AI-Guided Interviews — Generates detailed, project-aware feature specs through a structured 4-phase interview. No more blank-page problem.

🔁 Autonomous Coding Loops — Hands specs to Claude Code (or any agent) and runs implement → test → fix cycles with git worktree isolation.

✨ Spec Autocomplete — AI pre-fills spec names from your codebase context when running /run.

📥 Action Inbox — Review AI decisions inline without breaking your flow. The loop pauses, you approve or redirect, it continues.

📊 Run Summaries — See exactly what changed and why after each loop completes, with activity feed and diff stats.

📋 Tailored Prompts — Generates prompts, guides, and scripts specific to your stack. Not generic templates — actual context about your project.

🔌 BYOK — Bring your own API keys. Works with Anthropic, OpenAI, or OpenRouter. Keys stay local, never leave your machine.

🖥️ Interactive TUI — Full terminal interface with persistent session state. No flags to remember.


🎯 How It Works

1. Scan

wiggum init

Wiggum reads your package.json, config files, and source tree. A multi-agent AI system then analyzes the results:

  1. Planning Orchestrator — creates an analysis plan based on detected stack
  2. Parallel Workers — Context Enricher explores code while Tech Researchers gather best practices
  3. Synthesis — merges results, detects relevant MCP servers
  4. Evaluator-Optimizer — QA loop that validates and refines the output

Output: a .ralph/ directory with configuration, prompts, guides, and scripts — all tuned to your project.

2. Spec

wiggum new payment-flow

An AI-guided interview walks you through:

Phase       What happens
Context     Share reference URLs, docs, or files
Goals       Describe what you want to build
Interview   AI asks 3–5 clarifying questions
Generation  Produces a detailed feature spec in .ralph/specs/

3. Loop

wiggum run payment-flow

Wiggum hands the spec + prompts + project context to your coding agent and runs an autonomous loop:

implement → run tests → fix failures → repeat

Supports git worktree isolation (--worktree) for running multiple features in parallel.
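
For example, two features can run in parallel, each loop in its own worktree (the feature names here are placeholders):

wiggum run payment-flow --worktree   # terminal 1
wiggum run user-auth --worktree      # terminal 2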


🖥️ Interactive Mode

Running wiggum with no arguments opens the TUI — the recommended way to use Wiggum:

$ wiggum

Command             Alias  Description
/init               /i     Scan project, configure AI provider
/new <feature>      /n     AI interview → feature spec
/run <feature>      /r     Run autonomous coding loop
/monitor <feature>  /m     Monitor a running feature
/sync               /s     Re-scan project, update context
/help               /h     Show commands
/exit               /q     Exit
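
A typical session chains these together (an illustrative sketch; the actual prompt rendering may differ):

$ wiggum
/init                # scan the project, pick a provider
/new payment-flow    # interview → spec
/run payment-flow    # hand off to the coding agent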

📁 Generated Files

.ralph/
├── ralph.config.cjs          # Stack detection results + loop config
├── prompts/
│   ├── PROMPT.md             # Implementation prompt
│   ├── PROMPT_feature.md     # Feature planning
│   ├── PROMPT_e2e.md         # E2E testing
│   ├── PROMPT_verify.md      # Verification
│   ├── PROMPT_review_manual.md  # PR review (manual - stop at PR)
│   ├── PROMPT_review_auto.md    # PR review (auto - review, no merge)
│   └── PROMPT_review_merge.md   # PR review (merge - review + auto-merge)
├── guides/
│   ├── AGENTS.md             # Agent instructions (CLAUDE.md)
│   ├── FRONTEND.md           # Frontend patterns
│   ├── SECURITY.md           # Security guidelines
│   └── PERFORMANCE.md        # Performance patterns
├── scripts/
│   └── feature-loop.sh       # Main loop script
├── specs/
│   └── _example.md           # Example spec template
└── LEARNINGS.md              # Accumulated project learnings

🔧 CLI Reference

wiggum init [options]

Scan the project, detect the tech stack, generate configuration.

Flag               Description
--provider <name>  AI provider: anthropic, openai, openrouter (default: anthropic)
-i, --interactive  Stay in interactive mode after init
-y, --yes          Accept defaults, skip confirmations
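
For example, a non-interactive setup pinned to a provider (using the flags above):

wiggum init --provider openai -y
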
wiggum new <feature> [options]

Create a feature specification via AI-powered interview.

Flag               Description
--ai               Use AI interview (default in TUI mode)
--provider <name>  AI provider for spec generation
--model <model>    Model to use
-e, --edit         Open in editor after creation
-f, --force        Overwrite existing spec
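
For example, to redo a spec and review it right away (the feature name is a placeholder):

wiggum new payment-flow -e -f    # overwrite the existing spec, open it in your editor
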
wiggum run <feature> [options]

Run the autonomous development loop.

Flag                    Description
--worktree              Git worktree isolation (parallel features)
--resume                Resume an interrupted loop
--model <model>         Claude model (opus, sonnet)
--max-iterations <n>    Max iterations (default: 10)
--max-e2e-attempts <n>  Max E2E retries (default: 5)
--review-mode <mode>    manual (stop at PR), auto (review, no merge), or merge (review + merge). Default: manual
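
For example, a longer unattended run in an isolated worktree that opens and reviews a PR but leaves the merge to you:

wiggum run payment-flow --worktree --max-iterations 20 --review-mode auto
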
wiggum monitor <feature> [options]

Track feature development progress in real-time.

Flag                  Description
--interval <seconds>  Refresh interval (default: 5)
--bash                Use bash monitor script
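
For example, to poll a running loop more frequently:

wiggum monitor payment-flow --interval 2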

🔌 AI Providers

Wiggum requires an API key from one of these providers:

Provider    Environment Variable
Anthropic   ANTHROPIC_API_KEY
OpenAI      OPENAI_API_KEY
OpenRouter  OPENROUTER_API_KEY

Optional services for deeper analysis:

Service   Variable          Purpose
Tavily    TAVILY_API_KEY    Web search for current best practices
Context7  CONTEXT7_API_KEY  Up-to-date documentation lookup

Keys are stored in .ralph/.env.local and never leave your machine.
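
A sketch of what .ralph/.env.local might hold (variable names from the tables above; the values are placeholders, and a single provider key is enough):

ANTHROPIC_API_KEY=sk-ant-...   # or OPENAI_API_KEY / OPENROUTER_API_KEY
TAVILY_API_KEY=tvly-...        # optional: web search
CONTEXT7_API_KEY=...           # optional: docs lookup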


🔍 Detection Capabilities (80+ technologies)

Category          Technologies
Frameworks        Next.js (App/Pages Router), React, Vue, Nuxt, Svelte, SvelteKit, Remix, Astro
Package Managers  npm, yarn, pnpm, bun
Testing           Jest, Vitest, Playwright, Cypress
Styling           Tailwind CSS, CSS Modules, Styled Components, Emotion, Sass
Databases         PostgreSQL, MySQL, SQLite, MongoDB, Redis
ORMs              Prisma, Drizzle, TypeORM, Mongoose, Kysely
APIs              REST, GraphQL, tRPC, OpenAPI
State             Zustand, Jotai, Redux, Pinia, Recoil, MobX, Valtio
UI Libraries      shadcn/ui, Radix, Material UI, Chakra UI, Ant Design, Headless UI
Auth              NextAuth.js, Clerk, Auth0, Supabase Auth, Lucia, Better Auth
Analytics         PostHog, Mixpanel, Amplitude, Google Analytics, Plausible
Payments          Stripe, Paddle, LemonSqueezy
Email             Resend, SendGrid, Postmark, Mailgun
Deployment        Vercel, Netlify, Railway, Fly.io, Docker, AWS
Monorepos         Turborepo, Nx, Lerna, pnpm workspaces
MCP               Detects MCP server/client configs, recommends servers based on stack

📋 Requirements

  • Node.js >= 18.0.0
  • Git (for worktree features)
  • An AI provider API key (Anthropic, OpenAI, or OpenRouter)
  • Claude Code or another coding agent (for wiggum run)

🤝 Contributing

Contributions welcome! See CONTRIBUTING.md for guidelines.

git clone https://github.com/federiconeri/wiggum-cli.git
cd wiggum-cli
npm install
npm run build
npm test


📄 License

MIT + Commons Clause — see LICENSE.

You can use, modify, and distribute Wiggum freely. You may not sell the software or a service whose value derives substantially from Wiggum's functionality.


Built on the Ralph loop technique by Geoffrey Huntley
