
feat: add bmad-prfaq skill as alternative analysis path#2157

Merged
bmadcode merged 3 commits into main from add-prfaq-skill
Mar 28, 2026
Conversation

@bmadcode
Collaborator

Summary

  • Adds the bmad-prfaq skill — Amazon's Working Backwards PRFAQ methodology as an alternative to the product brief for Phase 1 analysis
  • 5-stage coached workflow: Ignition → Press Release → Customer FAQ → Internal FAQ → Verdict, with headless mode support
  • Subagent architecture (artifact analyzer + web researcher) for context gathering without parent agent bloat
  • Produces PRFAQ document + distillate for downstream PRD consumption
  • Updates module-help.csv, docs, and workflow-map diagram to integrate the new skill

Key design decisions

  • Alternative, not replacement — PRFAQ sits alongside product brief in Phase 1; CSV descriptions guide users to choose based on their needs
  • Compaction-resilient — coaching persona re-anchored in each stage prompt; coaching notes captured as HTML comments in the output document to survive context loss
  • Context-efficient — explicit "do not read" guards prevent parent agent from processing artifacts that subagents handle; resume detection constrained to first 20 lines; subagent response budgets capped
  • Concept-type adaptive — non-commercial concepts (open-source, internal tools, community projects) get calibrated question framing throughout all stages
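The "resume detection constrained to first 20 lines" guard described above can be sketched as follows — a minimal illustration, where the marker format and function name are assumptions, not the shipped SKILL.md logic:

```python
import re

def detect_resume_stage(doc_text: str, max_lines: int = 20):
    """Look for a frontmatter `stage:` marker in an existing PRFAQ document,
    scanning only the first `max_lines` lines so the parent agent never
    ingests the full artifact. Returns the stage number, or None if no
    resumable document is found."""
    for line in doc_text.splitlines()[:max_lines]:
        m = re.match(r"stage:\s*(\d+)", line.strip())
        if m:
            return int(m.group(1))
    return None

# A document paused mid-gauntlet resumes at its recorded stage...
assert detect_resume_stage("---\nstatus: draft\nstage: 3\n---\n# Headline") == 3
# ...while a marker buried past the scan window is deliberately ignored.
assert detect_resume_stage("\n" * 25 + "stage: 3") is None
```

The hard line cap is what keeps the check context-efficient: the parent agent can decide whether to resume without ever reading the document body.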

Test plan

  • All existing tests pass (205 installation tests, lint, format)
  • Run PRFAQ skill end-to-end in guided mode with a commercial product concept
  • Run PRFAQ skill end-to-end with a non-commercial concept to verify adaptation
  • Run PRFAQ skill in headless mode with structured input
  • Verify resume detection works across session boundaries
  • Verify subagent delegation (artifact analyzer + web researcher) fires correctly

🤖 Generated with Claude Code

Add Working Backwards PRFAQ challenge skill for stress-testing product
concepts through Amazon's PRFAQ methodology. Includes press release
drafting, customer FAQ, internal FAQ, and verdict stages with subagent
support for artifact scanning and web research.

- New bmad-prfaq skill with 5-stage interactive gauntlet and headless mode
- Subagents for artifact analysis and web research (graceful degradation)
- Research-grounded output directive for current market/competitive data
- Always produces distillate for downstream PRD consumption
- Fix manifest array syntax in both prfaq and product-brief manifests
- Drop number prefixes from reference files
- Update docs: getting-started, workflow-map, agents, skills reference
- Add analysis-phase explainer doc with comparison table and decision guide
- Update workflow-map-diagram.html with prfaq card
- Add -H and -A args to CSV for both skills
- Add unist-util-visit as devDependency (was imported but undeclared)

Add coaching persona re-anchors to all stage prompts so the behavioral
directive survives context compaction. Add do-not-read guards at resume
detection, headless mode, and input gathering to prevent parent agent
context bloat. Add Stage 1 coaching notes capture. Adapt template and
press release stage for non-commercial concept types. Cap subagent
response token budgets.
@coderabbitai

coderabbitai bot commented Mar 28, 2026

📝 Walkthrough

Walkthrough

This PR introduces a new PRFAQ (Working Backwards) workflow skill (bmad-prfaq) with comprehensive stage-based guidance, subagent specifications, and templates. Documentation is updated across reference and tutorial pages to include the PRFAQ option alongside product brief, with manifest updates defining capability dependencies and menu triggers.

Changes

**PRFAQ Skill Implementation** — `src/bmm-skills/1-analysis/bmad-prfaq/SKILL.md`, `src/bmm-skills/1-analysis/bmad-prfaq/bmad-manifest.json`, `src/bmm-skills/1-analysis/bmad-prfaq/assets/prfaq-template.md`, `src/bmm-skills/1-analysis/bmad-prfaq/agents/*`, `src/bmm-skills/1-analysis/bmad-prfaq/references/*`
New complete PRFAQ skill with 5-stage workflow (Ignition, Press Release, Customer FAQ, Internal FAQ, Verdict). Includes manifest defining working-backwards capability with menu code WB, templated document structure, subagent specifications (artifact analyzer, web researcher) with strict JSON output schemas, and stage-specific coaching/execution guidance for both interactive and headless modes.

**Reference Documentation Updates** — `docs/reference/agents.md`, `docs/reference/commands.md`, `docs/reference/workflow-map.md`
Updated Analyst agent to include WB menu trigger for new PRFAQ skill. Added bmad-product-brief and bmad-prfaq entries to workflow skills reference. Updated Phase 1 workflow map with renamed skill (bmad-product-brief with clarity condition) and new bmad-prfaq workflow entry.

**Tutorial & Explanation Content** — `docs/explanation/analysis-phase.md`, `docs/tutorials/getting-started.md`
Added new Analysis Phase explanation documenting purpose, four optional tools (Brainstorming, Research, Product Brief, PRFAQ), decision matrix, and guidance linking to Phase 2. Updated getting-started tutorial to present both product brief and PRFAQ options with "when to use" context.

**Analyst Agent & Product Brief Updates** — `src/bmm-skills/1-analysis/bmad-agent-analyst/SKILL.md`, `src/bmm-skills/1-analysis/bmad-product-brief/bmad-manifest.json`
Added WB capability mapping to analyst persona. Fixed product-brief manifest ordering syntax (split comma-separated string into array).

**Dependencies** — `package.json`
Added unist-util-visit@^5.1.0 dev dependency.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~22 minutes

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
- **Title check** — ✅ Passed: The title 'feat: add bmad-prfaq skill as alternative analysis path' directly summarizes the main change: adding a new PRFAQ skill as an alternative analysis approach in Phase 1, which aligns with the changeset's primary objective.
- **Description check** — ✅ Passed: The description provides a comprehensive summary of the PRFAQ skill addition, covering its 5-stage workflow, architecture, key design decisions, and test plan. It clearly relates to the documented changes across multiple files and explains the rationale behind the implementation.
- **Docstring Coverage** — ✅ Passed: No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
src/bmm-skills/1-analysis/bmad-prfaq/agents/artifact-analyzer.md (1)

39-59: Verify output schema aligns with distillate consumption.

The artifact analyzer outputs a JSON object with specific keys (key_insights, user_market_context, technical_context, ideas_and_decisions, raw_detail_worth_preserving). However, verdict.md specifies distillate content themes that don't directly map to these keys (e.g., "rejected framings," "requirements signals," "competitive intelligence").

Ensure the downstream distillate generation logic (in verdict.md or the parent workflow) can correctly consume and transform these JSON keys into the required distillate themes. Consider documenting the mapping explicitly or aligning the JSON keys with distillate theme names.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/bmm-skills/1-analysis/bmad-prfaq/agents/artifact-analyzer.md` around
lines 39 - 59, The artifact analyzer's JSON output keys (key_insights,
user_market_context, technical_context, ideas_and_decisions,
raw_detail_worth_preserving) don't directly match the distillate themes
referenced in verdict.md (e.g., rejected framings, requirements signals,
competitive intelligence); update the pipeline by either (a) renaming or adding
keys in the artifact-analyzer output to match verdict.md theme names, or (b)
adding a clear, documented mapping function used by the distillate generator
that translates artifact-analyzer keys to verdict.md themes (document the
mapping in artifact-analyzer.md and reference it from verdict.md), and ensure
the mapping handles ideas_and_decisions -> {accepted|rejected|open} -> rejected
framings/decision rationale and maps
user_market_context/technical_context/raw_detail_worth_preserving into the
appropriate distillate buckets.
ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 2efbbfd9-4319-4f18-91e0-ea69d75c75b8

📥 Commits

Reviewing files that changed from the base of the PR and between fa909a8 and c43964d.

⛔ Files ignored due to path filters (2)
  • src/bmm-skills/module-help.csv is excluded by !**/*.csv
  • website/public/workflow-map-diagram.html is excluded by !website/**
📒 Files selected for processing (17)
  • docs/explanation/analysis-phase.md
  • docs/reference/agents.md
  • docs/reference/commands.md
  • docs/reference/workflow-map.md
  • docs/tutorials/getting-started.md
  • package.json
  • src/bmm-skills/1-analysis/bmad-agent-analyst/SKILL.md
  • src/bmm-skills/1-analysis/bmad-prfaq/SKILL.md
  • src/bmm-skills/1-analysis/bmad-prfaq/agents/artifact-analyzer.md
  • src/bmm-skills/1-analysis/bmad-prfaq/agents/web-researcher.md
  • src/bmm-skills/1-analysis/bmad-prfaq/assets/prfaq-template.md
  • src/bmm-skills/1-analysis/bmad-prfaq/bmad-manifest.json
  • src/bmm-skills/1-analysis/bmad-prfaq/references/customer-faq.md
  • src/bmm-skills/1-analysis/bmad-prfaq/references/internal-faq.md
  • src/bmm-skills/1-analysis/bmad-prfaq/references/press-release.md
  • src/bmm-skills/1-analysis/bmad-prfaq/references/verdict.md
  • src/bmm-skills/1-analysis/bmad-product-brief/bmad-manifest.json

```yaml
status: "{status}"
created: "{timestamp}"
updated: "{timestamp}"
stage: "{current_stage}"
```


⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Verify stage field format in PRFAQ reference files
rg -n "stage:" src/bmm-skills/1-analysis/bmad-prfaq/references/*.md
```

Repository: bmad-code-org/BMAD-METHOD
Length of output: 1368

🏁 Script executed:

```bash
#!/bin/bash
# Read the actual template file to confirm the issue
cat -n src/bmm-skills/1-analysis/bmad-prfaq/assets/prfaq-template.md | head -20
```

Repository: bmad-code-org/BMAD-METHOD
Length of output: 829

🏁 Script executed:

```bash
#!/bin/bash
# Read SKILL.md to check resume detection logic
cat -n src/bmm-skills/1-analysis/bmad-prfaq/SKILL.md
```

Repository: bmad-code-org/BMAD-METHOD
Length of output: 9496

🏁 Script executed:

```bash
#!/bin/bash
# Search for where {current_stage} is set or substituted
rg -n "current_stage" src/bmm-skills/1-analysis/bmad-prfaq/ -A 2
```

Repository: bmad-code-org/BMAD-METHOD
Length of output: 300


Fix frontmatter stage field to use unquoted numeric value.

The template sets stage: "{current_stage}" as a quoted string. However, all stage reference files (press-release.md, customer-faq.md, internal-faq.md, verdict.md) use unquoted numeric stage values (stage: 2, stage: 3, stage: 4, stage: 5). Additionally, SKILL.md resume detection explicitly reads the stage field to determine which stage to resume from.

Update the template to stage: {current_stage} (without quotes) to ensure the frontmatter value matches the reference files and maintains consistent YAML typing for the resume detection logic.
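The typing mismatch the reviewer describes is easy to demonstrate: a YAML loader hands a quoted scalar back as a string, so a numeric comparison in resume detection breaks. The hand-rolled parser below stands in for a real YAML loader, and the comparison logic is illustrative, not the actual SKILL.md code:

```python
def parse_stage(frontmatter_line: str):
    """Mimic YAML scalar typing for a `stage:` line: quoted values stay
    strings, bare numerics become ints."""
    raw = frontmatter_line.split(":", 1)[1].strip()
    if raw.startswith('"') and raw.endswith('"'):
        return raw.strip('"')   # quoted template form -> str after substitution
    return int(raw)             # bare numeric -> int, matching the reference files

assert parse_stage("stage: 2") == 2        # reference-file form: int
assert parse_stage('stage: "2"') == "2"    # quoted template form: str
# A resume check like `parse_stage(line) >= 2` raises TypeError for the
# quoted form on Python 3 — which is why the unquoted template value matters.
```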

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/bmm-skills/1-analysis/bmad-prfaq/assets/prfaq-template.md` at line 6,
Update the frontmatter line that currently reads stage: "{current_stage}" to use
an unquoted numeric template value stage: {current_stage} so the YAML emits a
numeric stage (matching press-release.md, customer-faq.md, internal-faq.md,
verdict.md) and so resume detection that reads the stage field in SKILL.md works
correctly.


```markdown
## On Activation

Load available config from `{project-root}/_bmad/config.yaml` and `{project-root}/_bmad/config.user.yaml` (root level and `bmm` section). If config is missing, let the user know `bmad-builder-setup` can configure the module at any time. Use sensible defaults for anything not configured.
```
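The activation step quoted above amounts to a layered config load with fallback defaults. A minimal sketch — the `parse_simple_yaml` helper and the default keys are stand-ins, since the real loader is not part of this PR:

```python
from pathlib import Path

def parse_simple_yaml(text: str) -> dict:
    """Minimal flat `key: value` parser standing in for a real YAML loader."""
    out = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and ":" in line:
            key, _, value = line.partition(":")
            out[key.strip()] = value.strip()
    return out

def load_config(project_root: str) -> dict:
    """Merge base config, then user overrides, on top of sensible defaults.
    Later sources win; missing files are silently skipped."""
    merged = {"communication_language": "English"}  # assumed default
    for name in ("config.yaml", "config.user.yaml"):
        path = Path(project_root) / "_bmad" / name
        if path.exists():
            merged.update(parse_simple_yaml(path.read_text()))
    return merged
```

Because missing files are skipped rather than treated as errors, an unconfigured project still activates with defaults — matching the "use sensible defaults for anything not configured" instruction.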


⚠️ Potential issue | 🔴 Critical

Fix broken config file reference causing CI failure

Line 24 references config.user.yaml, but validate-file-refs.js strict mode resolves it to a non-existent in-repo path (src/config.user.yaml) and fails the pipeline. Please update this line to either (a) point to an existing canonical repo path, or (b) rephrase to avoid a hard file-path reference that the validator treats as required.

Suggested wording adjustment:

```diff
-Load available config from `{project-root}/_bmad/config.yaml` and `{project-root}/_bmad/config.user.yaml` (root level and `bmm` section). If config is missing, let the user know `bmad-builder-setup` can configure the module at any time. Use sensible defaults for anything not configured.
+Load available config from `{project-root}/_bmad/config.yaml` plus optional user override config in `_bmad/` (root level and `bmm` section). If config is missing, let the user know `bmad-builder-setup` can configure the module at any time. Use sensible defaults for anything not configured.
```
🧰 Tools
🪛 GitHub Actions: Quality & Validation

[error] 24-24: validate-file-refs.js (strict mode) found a broken reference: config.user.yaml → src/config.user.yaml. Target not found (config.user.yaml at line 24 in this file).

🪛 GitHub Check: validate

[warning] 24-24:
Broken reference: config.user.yaml → src/config.user.yaml

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/bmm-skills/1-analysis/bmad-prfaq/SKILL.md` at line 24, Update the wording
in SKILL.md so it does not reference a bare "config.user.yaml" that the
validator treats as an in-repo required path; either change the reference to the
explicit location "{project-root}/_bmad/config.user.yaml" or rephrase the
sentence to say "the project's _bmad config files (e.g. config.yaml and
config.user.yaml)" to avoid a hard file-path that validate-file-refs.js will
resolve as required.

@augmentcode

augmentcode bot commented Mar 28, 2026

🤖 Augment PR Summary

Summary: Introduces a new Phase 1 (Analysis) alternative workflow, bmad-prfaq, implementing Amazon's Working Backwards PRFAQ methodology alongside the existing product brief path.

Changes:

  • Added a new docs explainer page describing Analysis tools and when to use each.
  • Updated reference docs (agents, commands, workflow map, getting started) to surface PRFAQ as an Analysis option.
  • Added the bmad-prfaq skill with a 5-stage coached flow (Ignition → Press Release → Customer FAQ → Internal FAQ → Verdict) and headless mode support.
  • Added subagent prompt specs (artifact analyzer + web researcher) to gather internal docs + current web context without bloating the parent prompt.
  • Added a PRFAQ document template and stage reference files that guide iterative drafting and progressive document updates.
  • Fixed the product-brief manifest's after array formatting.
  • Added unist-util-visit as a dependency (used by existing rehype/remark tooling).



@augmentcode augmentcode bot left a comment


Review completed. 4 suggestions posted.


"prettier": "^3.7.4",
"prettier-plugin-packagejson": "^2.5.19",
"sharp": "^0.33.5",
"unist-util-visit": "^5.1.0",


In package.json:100, unist-util-visit is added but package-lock.json in this branch doesn't list it under the root packages[""] dependencies, which typically causes npm ci to fail due to a lockfile/package.json mismatch (Rule: AGENTS.md).

Severity: high



```markdown
## On Activation

Load available config from `{project-root}/_bmad/config.yaml` and `{project-root}/_bmad/config.user.yaml` (root level and `bmm` section). If config is missing, let the user know `bmad-builder-setup` can configure the module at any time. Use sensible defaults for anything not configured.
```


At src/bmm-skills/1-analysis/bmad-prfaq/SKILL.md:24, the config load paths (_bmad/config.yaml / _bmad/config.user.yaml) don't match the repo's documented per-module config layout (_bmad/core/config.yaml + _bmad/<module>/config.yaml), so PRFAQ may not resolve required vars in real installs (Guideline: skill_validation).

Severity: high



```markdown
Load available config from `{project-root}/_bmad/config.yaml` and `{project-root}/_bmad/config.user.yaml` (root level and `bmm` section). If config is missing, let the user know `bmad-builder-setup` can configure the module at any time. Use sensible defaults for anything not configured.

Resolve: `{user_name}`, `{communication_language}`, `{document_output_language}`, `{planning_artifacts}`, `{project_name}`.
```


At src/bmm-skills/1-analysis/bmad-prfaq/SKILL.md:26, the resolved var list omits {project_knowledge}, but it's referenced later for Artifact Analyzer scanning, which could lead to an undefined/incorrect scan root during execution.

Severity: medium


```markdown
inputs: []
---

# {Headline}
```


In src/bmm-skills/1-analysis/bmad-prfaq/assets/prfaq-template.md:10, single-brace placeholders like {Headline} / {City, Date} look like skill variable references; many of these aren't defined variables and may fail deterministic skill validation or get treated as template vars unexpectedly (Guideline: skill_validation).

Severity: high


Also update PRFAQ config path to use correct _config/bmm/ prefix.
@bmadcode bmadcode merged commit abfc56b into main Mar 28, 2026
5 checks passed
