refactor(skills): convert validate-prd to native skill directory #1988
🤖 Augment PR Summary

**Summary:** This PR refactors the PRD validation workflow to a self-contained native skill so it can be installed/triggered consistently across supported IDE skill systems.

**Technical Notes:** The step and data files are copied verbatim into the skill, and the installer/manifest generation should now emit a
📝 Walkthrough

Converts a workflow-based PRD validation system to a native skill architecture, introducing a 13-step validation workflow.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 3 passed
Important: Merge conflicts detected
Actionable comments posted: 4
Note: Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
♻️ Duplicate comments (2)
src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-12-completeness-validation.md (2)
22-34: ⚠️ Potential issue | 🔴 Critical

Same critical contradiction as step-v-10: autonomous execution vs. no content without input.
Line 22: "🛑 NEVER generate content without user input"
Line 34: "✅ This step runs autonomously - no user input needed"

This is the same systemic issue present in step-v-10. The Universal Rules template is inappropriate for autonomous validation steps.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-12-completeness-validation.md` around lines 22 - 34, The file contains a contradictory policy: the phrase "🛑 NEVER generate content without user input" conflicts with "✅ This step runs autonomously - no user input needed"; update the step-v-12-completeness-validation.md content to remove or reconcile the contradiction by either (a) removing the user-input prohibition line when the step must run autonomously, or (b) changing "This step runs autonomously" to clarify it only runs automated checks without generating user-facing content, and ensure the Role Reinforcement section (phrases like "YOU ARE A FACILITATOR, not a content generator" and "This step runs autonomously - no user input needed") is consistent with the Universal Rules templates so the file communicates a single, non-conflicting execution mode.
215-215: ⚠️ Potential issue | 🟠 Major

Auto-proceed without validation that report was written successfully.
Same issue as step-v-10: Line 215 auto-proceeds to step-v-13 without verifying that the completeness findings were successfully appended to the validation report.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-12-completeness-validation.md` at line 215, The doc auto-proceeds to "{nextStepFile} (step-v-13-report-complete.md)" with no verification that the completeness findings were appended; change the flow so that wherever you append or write the completeness findings (the code or step that produces the report entry used by step-v-12-completeness-validation.md), you check the write/append result (e.g., return value or caught exception from appendFindingsToReport / writeValidationReport) and only render the "Without delay, read fully and follow: {nextStepFile} (step-v-13-report-complete.md)" line when that check indicates success; on failure log/raise an error and do not include or auto-navigate to step-v-13 until a successful write is confirmed.
🟠 Major comments (20)
src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-03-density-validation.md (1)
117-141: ⚠️ Potential issue | 🟠 Major

Append-only reporting is non-idempotent.
Repeated runs can duplicate the “Information Density Validation” section, making reports noisy and hard to trust. Replace-or-upsert behavior is safer for reruns.
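The replace-or-upsert behavior could be sketched like this in Python; the `upsert_section` name and the header-to-next-`##` section boundary are assumptions for illustration, not the workflow's actual API:

```python
import re

def upsert_section(report: str, header: str, new_block: str) -> str:
    """Replace an existing markdown section (from `header` to the next
    top-level '## ' heading or EOF), or append it if absent."""
    pattern = re.compile(
        rf"^{re.escape(header)}\n.*?(?=^## |\Z)", re.DOTALL | re.MULTILINE
    )
    if pattern.search(report):
        # Section exists: overwrite it in place so reruns stay idempotent.
        return pattern.sub(new_block.rstrip() + "\n\n", report)
    # First run: append the section at the end of the report.
    return report.rstrip() + "\n\n" + new_block.rstrip() + "\n"
```

Running this twice with the same section yields one copy, which is the idempotency property the comment asks for.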
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-03-density-validation.md` around lines 117 - 141, The reporting code currently appends the "## Information Density Validation" markdown block, which is non-idempotent; update the writer that emits this block (search for the exact header string "## Information Density Validation") to perform replace-or-upsert instead of blind append: detect an existing section by regex matching from the header to the next top-level header (or EOF), remove or replace that range, then write the new block (or implement an upsert helper like upsertValidationSection) so repeated runs overwrite the prior section rather than duplicating it.

src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-03-density-validation.md (1)
104-113: ⚠️ Potential issue | 🟠 Major

Double-counting policy is undefined.
A single phrase can match multiple categories and currently may be counted multiple times. Define precedence or dedupe rules to keep totals stable.
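One way to make the totals stable is set-based dedupe with a precedence order; this Python sketch assumes matches are keyed by `(line, phrase)` and uses an illustrative precedence, not the spec's final rule:

```python
# Precedence: a phrase matched by an earlier category is not re-counted later.
PRECEDENCE = ["conversational_filler", "redundant_phrases", "wordy_phrases"]

def count_violations(matches: dict[str, set[tuple[int, str]]]) -> dict[str, int]:
    """matches maps category -> set of (line, phrase) occurrences."""
    seen: set[tuple[int, str]] = set()
    counts: dict[str, int] = {}
    for category in PRECEDENCE:
        unique = matches.get(category, set()) - seen  # drop already-counted phrases
        counts[category] = len(unique)
        seen |= unique
    counts["total"] = len(seen)  # each occurrence counted exactly once
    return counts
```

With this shape, a phrase matching two categories contributes to one category count and once to the total.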
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-03-density-validation.md` around lines 104 - 113, The total-violation calculation currently sums counts from "Conversational filler", "Wordy phrases", and "Redundant phrases" which allows a single phrase to be double-counted; update the spec to define a deterministic dedupe rule: when computing "Total", de-duplicate matched phrases by ID/text (i.e., use a set/union) OR apply a clear precedence order (e.g., "Critical" precedence: Conversational filler > Redundant phrases > Wordy phrases) and count a phrase only once according to that precedence; update the "Calculate total violations" section and any code that computes Total to either perform set-based de-duplication across category matches or enforce the stated precedence so totals are stable and unambiguous.

src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-03-density-validation.md (1)
62-75: ⚠️ Potential issue | 🟠 Major

Subprocess return schema is underspecified.
“Return structured findings” is too vague for deterministic downstream parsing. You need an explicit schema (fields/types) to prevent malformed handoff into the report section.
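For illustration, the expected shape could be pinned down as typed definitions; the field names below mirror the proposed schema and are assumptions about the eventual contract, not an existing API:

```python
from typing import Literal, TypedDict

class Finding(TypedDict):
    phrase: str
    line: int
    excerpt: str

class Counts(TypedDict):
    conversationalFiller: int
    wordyPhrases: int
    redundantPhrases: int
    total: int

class DensityResult(TypedDict):
    # One list of findings per violation category.
    conversationalFiller: list[Finding]
    wordyPhrases: list[Finding]
    redundantPhrases: list[Finding]
    counts: Counts
    severity: Literal["Critical", "Warning", "Pass"]
```

A downstream parser can then validate keys and types deterministically instead of guessing at "structured findings".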
Proposed fix
```diff
-Return structured findings with counts and examples."
+Return JSON with this exact schema:
+{
+  "conversationalFiller": [{"phrase": string, "line": number, "excerpt": string}],
+  "wordyPhrases": [{"phrase": string, "line": number, "excerpt": string}],
+  "redundantPhrases": [{"phrase": string, "line": number, "excerpt": string}],
+  "counts": {"conversationalFiller": number, "wordyPhrases": number, "redundantPhrases": number, "total": number},
+  "severity": "Critical" | "Warning" | "Pass"
+}"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-03-density-validation.md` around lines 62 - 75, The subprocess output must be an explicit, deterministic schema named e.g. DensityValidationResult: include top-level fields totalViolations (integer), severity (string: "Critical"|"Warning"|"Pass"), counts (object with integer fields conversational, wordy, redundant), findings (array of objects each with category (string), line (integer), text (string), severityScore (integer)), and examples (object mapping category->array of example strings); ensure all numeric types are integers, strings are plain text, arrays are explicit, and severity is derived from totalViolations using the given thresholds (>10, 5-10, <5) so downstream parsing is reliable.

src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-08-domain-compliance-validation.md (1)
64-74: ⚠️ Potential issue | 🟠 Major

You load `domain-complexity.csv` but then override it with hardcoded classifications.

The step explicitly loads `{domainComplexityData}` but uses a hardcoded domain list for complexity decisions. That can diverge from data (e.g., `edtech` is medium in the CSV context, but listed under high here), and medium-complexity handling is missing entirely.

Also applies to: 86-99
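A data-driven lookup might look like this sketch; the `domain,complexity` column names are assumptions about the CSV layout, not the file's confirmed schema:

```python
import csv
import io

def load_domain_complexity(csv_text: str) -> dict[str, str]:
    """Parse the loaded CSV into a domain -> complexity mapping."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {row["domain"].strip(): row["complexity"].strip().lower()
            for row in reader}

def complexity_for(domain: str, table: dict[str, str]) -> str:
    # Unknown domains get an explicit marker rather than a silent default,
    # so missing data surfaces instead of masquerading as a classification.
    return table.get(domain, "unknown")
```

Driving the branch logic off this mapping keeps the step and the dataset from diverging, and makes medium-complexity handling fall out naturally.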
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-08-domain-compliance-validation.md` around lines 64 - 74, The step currently loads {domainComplexityData} (domain-complexity.csv) but ignores it and uses a hardcoded domain list for complexity decisions; replace the hardcoded classifications with logic that parses and uses the loaded CSV data to determine domain complexity and required sections (use the parsed structure instead of the hardcoded list referenced in this step), ensure you implement handling for "medium" complexity (previously missing) so medium domains follow their CSV-specified flow, and update any conditional branches or arrays that reference the old hardcoded domains to consult the parsed domainComplexity mapping from domain-complexity.csv.

src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-07-implementation-leakage-validation.md (1)
67-73: ⚠️ Potential issue | 🟠 Major

Leakage rules are too broad for regulated products and can produce false positives.
The scanner treats many concrete terms as leakage, but some are legitimate requirement-level constraints (compliance-required environments/standards, mandated protocols, interoperability formats). Current exceptions are too narrow and will over-report violations.
Also applies to: 86-113, 170-170
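The allowlist idea could be sketched as follows; the term lists here are illustrative placeholders, not the workflow's actual rule set:

```python
# Compliance-mandated terms that are legitimate at the requirements level.
ALLOWLIST = {"HIPAA", "HL7 FHIR", "PCI DSS", "TLS 1.2"}
# Concrete implementation terms the scanner treats as leakage.
LEAKAGE_TERMS = {"PostgreSQL", "React", "Redis", "microservices"}

def leakage_violations(lines: list[str]) -> list[tuple[int, str]]:
    """Flag leakage terms, but skip lines carrying a compliance-mandated term."""
    hits = []
    for lineno, text in enumerate(lines, start=1):
        if any(allowed in text for allowed in ALLOWLIST):
            continue  # requirement-level constraint, not leakage
        for term in LEAKAGE_TERMS:
            if term in text:
                hits.append((lineno, term))
    return hits
```

A configurable allowlist (per project or domain) would let regulated products name mandated standards and protocols without tripping the scanner.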
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-07-implementation-leakage-validation.md` around lines 67 - 73, The "Scan for:" leakage rules are too broad—concrete technology/library/protocol names listed (Technology names, Library names, Data structures, Architecture patterns, Protocol names) are being flagged even when they appear as legitimate requirement-level constraints; update the leakage-validation logic referenced by the "Scan for:" section so that detectors only flag terms when they appear in implementation-specific contexts (code snippets, implementation steps, secrets/credentials, or internal-only notes) and add a configurable allowlist for compliance-mandated terms (e.g., specific standards, required cloud providers, mandated protocols/formats) to avoid false positives; expand the exceptions set mentioned around the same section and in the other noted ranges so the scanner consults this contextual rule and allowlist before reporting violations.

src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-04-brief-coverage-validation.md (1)
146-146: ⚠️ Potential issue | 🟠 Major

`{brief_file_name}` is undefined in this step's declared variables.

The report template references `{brief_file_name}`, but frontmatter only declares `productBrief: '{product_brief_path}'`. This unresolved placeholder will leak into output.

Proposed fix
```diff
-**Product Brief:** {brief_file_name}
+**Product Brief:** {productBrief}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-04-brief-coverage-validation.md` at line 146, The template uses the undefined placeholder `{brief_file_name}` while the frontmatter exposes `productBrief`, so update the step to use the declared variable or add a matching frontmatter key: either replace `{brief_file_name}` with `{productBrief}` in the step content (step-v-04-brief-coverage-validation.md) or add a `brief_file_name: '{brief_file_name_value}'` entry to the frontmatter; ensure the variable name you choose matches exactly the placeholder used in the markdown so no unresolved placeholder is emitted.

src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-08-domain-compliance-validation.md (1)
158-166: ⚠️ Potential issue | 🟠 Major

Low-complexity branch needs an explicit termination after jumping to next step.
After `Without delay, read fully and follow: {nextStepFile}`, execution can still continue into section 6 in this same file unless you explicitly stop. That risks duplicate/contradictory report blocks.

Proposed fix
```diff
 Without delay, read fully and follow: {nextStepFile}
+STOP - do not execute remaining sections in this step.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-08-domain-compliance-validation.md` around lines 158 - 166, The low-complexity branch must explicitly terminate after the jump to the next step to prevent execution from falling through into "### 6. Report Compliance Findings (High-Complexity Domains)"; update the block that currently reads 'Without delay, read fully and follow: {nextStepFile}' (and the preceding Display: "**Domain Compliance Validation Skipped**...") to append a clear termination directive (for example "STOP: do not continue in this file" or "END OF WORKFLOW FOR LOW-COMPLEXITY DOMAINS — proceed only to {nextStepFile}") so processing halts and section "### 6. Report Compliance Findings (High-Complexity Domains)" is not executed for low-complexity domains.

src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-07-implementation-leakage-validation.md (1)
21-34: ⚠️ Potential issue | 🟠 Major

Instruction conflict repeats here too (autonomous step vs no-generation rule).
This step cannot both run autonomously and refuse generation without user input while still appending findings/report content.
Also applies to: 46-48
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-07-implementation-leakage-validation.md` around lines 21 - 34, The step contains contradictory directives: "🛑 NEVER generate content without user input" vs "This step runs autonomously" and role lines like "YOU ARE A FACILITATOR, not a content generator" in step-v-07-implementation-leakage-validation.md; decide the intended behavior (autonomous validation) and reconcile by removing or editing the opposing lines so they consistently allow autonomous report generation, e.g., delete or reword "NEVER generate content without user input" and "YOU ARE A FACILITATOR" or change them to permit automated findings output, ensure the step header and the sentence "This step runs autonomously - no user input needed" remain authoritative, and keep the communication requirement `{communication_language}` intact.

src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-06-traceability-validation.md (1)
21-34: ⚠️ Potential issue | 🟠 Major

Same contradictory execution policy appears in this step.
You require autonomous validation and report output while also forbidding generation without user input. This needs a single consistent rule set.
Also applies to: 46-48
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-06-traceability-validation.md` around lines 21 - 34, The step file contains conflicting policies: the bullet "🛑 NEVER generate content without user input" contradicts "This step runs autonomously - no user input needed"; update step-v-06-traceability-validation.md to adopt a single consistent policy (preferably allow autonomous validation/report generation for this automated step), remove or reword the "NEVER generate" line and any duplicate/conflicting lines (e.g., the communication/config rule and the "YOU MUST ALWAYS SPEAK OUTPUT..." line) so all role statements (Validation Architect, autonomous run, and communication style) align, and apply the same correction to the other similar block referenced ("Also applies to: 46-48") so the file no longer contains contradictory instructions.

src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-10-smart-validation.md (1)
126-145: ⚠️ Potential issue | 🟠 Major

Division by zero: No guard for empty FR sets.
Lines 127-129 calculate percentages and averages but provide no handling for the case where the PRD contains zero Functional Requirements. This will cause undefined behavior (percentage calculations with 0 denominator).
Even if earlier steps are supposed to catch this, defensive coding requires a guard here.
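A guarded calculation might look like this sketch; the per-FR list-of-scores structure is an assumption made for illustration:

```python
def fr_quality(scores: list[list[int]]) -> dict:
    """Compute SMART quality stats; scores is one list of category scores per FR."""
    total = len(scores)
    if total == 0:
        # Short-circuit before any division so an empty PRD cannot crash the step.
        return {"status": "No Functional Requirements found - cannot assess SMART quality"}
    all_ge3 = sum(all(s >= 3 for s in fr) for fr in scores)
    all_ge4 = sum(all(s >= 4 for s in fr) for fr in scores)
    flat = [s for fr in scores for s in fr]
    return {
        "pct_ge3": 100 * all_ge3 / total,
        "pct_ge4": 100 * all_ge4 / total,
        "average": sum(flat) / len(flat),
    }
```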
Proposed defensive check
Add to line 126:
```diff
 **Calculate overall FR quality:**
+- If total FRs = 0, report: "No Functional Requirements found - cannot assess SMART quality" and skip to next step
 - Percentage of FRs with all scores ≥ 3
 - Percentage of FRs with all scores ≥ 4
 - Average score across all FRs and categories
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-10-smart-validation.md` around lines 126 - 145, The SMART validation step lacks a guard against an empty set of Functional Requirements, so add a defensive check around the calculations that use the total/count (the variables rendered into {count}, {total}, percentage and average): if total == 0 (or count == 0) short-circuit the percentage/average computation and set the "All scores ≥ 3", "All scores ≥ 4" percentages to 0% (or "N/A") and the overall average to 0 (or "N/A"), then render the validation report accordingly; update the "Calculate overall FR quality" block and the report generation in the SMART Requirements Validation section to use these guarded values to avoid division-by-zero.

src/bmm/workflows/2-plan-workflows/bmad-validate-prd/data/prd-purpose.md (1)
36-47: ⚠️ Potential issue | 🟠 Major

Traceability chain omits Non-Functional Requirements.
Lines 36-38 show the chain as:
Vision → Success Criteria → User Journeys → Functional Requirements → (future: User Stories)

But NFRs are a critical artifact type mentioned throughout the document (lines 88-111, line 139, line 167, line 181). They're conspicuously absent from the traceability chain.
Where do NFRs fit? Do they trace to Success Criteria? Domain Requirements? This is a conceptual gap in the methodology.
Suggested expansion of the chain
```diff
 **PRD starts the chain:**
-Vision → Success Criteria → User Journeys → Functional Requirements → (future: User Stories)
+Vision → Success Criteria → User Journeys → Functional Requirements + Non-Functional Requirements → (future: User Stories)

 **In the PRD, establish:**
 - Vision → Success Criteria alignment
 - Success Criteria → User Journey coverage
 - User Journey → Functional Requirement mapping
+- Success Criteria + Domain Requirements → Non-Functional Requirement derivation
 - All requirements traceable to user needs
```
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/2-plan-workflows/bmad-validate-prd/data/prd-purpose.md` around lines 36 - 47, Update the traceability chain string and accompanying bullet list to explicitly include Non-Functional Requirements (NFRs): change the chain line "Vision → Success Criteria → User Journeys → Functional Requirements → (future: User Stories)" to "Vision → Success Criteria → User Journeys → Functional Requirements + Non-Functional Requirements → (future: User Stories)" and add a bullet that maps how NFRs are derived (e.g., "Success Criteria + Domain Requirements → Non-Functional Requirement derivation"), ensuring the document's guidance (the bullet list under "**In the PRD, establish:**") and any references to "All requirements traceable to user needs" explicitly mention NFR traceability so NFRs are treated alongside functional requirements.

src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-10-smart-validation.md (1)
182-182: ⚠️ Potential issue | 🟠 Major

Auto-proceed without validating report write success.
Line 182 instructs the agent to auto-proceed to the next step without any verification that the validation report was successfully updated. If the file write failed, the agent moves forward with incomplete validation data.
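A minimal post-write verification could look like this; the report path handling and section header are assumptions for illustration:

```python
from pathlib import Path

def report_section_written(report_path: str, header: str) -> bool:
    """Confirm the report file exists and contains the expected section header."""
    path = Path(report_path)
    return path.is_file() and header in path.read_text(encoding="utf-8")
```

An agent runner could call this before following `{nextStepFile}` and halt with an error when it returns `False`, instead of advancing with incomplete validation data.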
Suggested safety check
```diff
 Without delay, read fully and follow: {nextStepFile} (step-v-11-holistic-quality-validation.md)
+
+**Before proceeding:** Verify validation report contains "## SMART Requirements Validation" section. If missing, HALT with error.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-10-smart-validation.md` at line 182, The step currently auto-proceeds to "{nextStepFile}" (step-v-11-holistic-quality-validation.md) without verifying the validation report was persisted; modify the step logic that issues "Without delay, read fully and follow: {nextStepFile}" so it first checks the report write result (catch file/IO errors and verify the write/flush/response), surface/log any error, retry or abort on failure, and only then advance to step-v-11; locate the report-writing routine invoked prior to the auto-proceed directive and add explicit success/failure handling around that write operation.

src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-09-project-type-validation.md (1)
93-95: ⚠️ Potential issue | 🟠 Major

Defaulting to `web_app` hides classification failures and can generate false violations.

If `classification.projectType` is missing/unknown, forcing `web_app` makes the report look deterministic while using potentially wrong constraints. Treat this as unknown classification with explicit warning/severity instead of auto-assuming a concrete type.
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-09-project-type-validation.md` around lines 93 - 95, The current logic that assumes "web_app" when classification.projectType is missing should be changed to treat missing/unknown as an explicit unknown classification: in the validation step that reads classification.projectType (refer to the projectType usage in step-v-09-project-type-validation.md and any code that applies constraints based on classification.projectType), stop defaulting to "web_app", instead set projectType to "unknown" (or leave undefined) and emit a finding/entry with clear warning severity indicating classification is missing and constraints were not applied; ensure downstream constraint checks skip applying web_app-specific rules when projectType is unknown and the findings summary documents the unknown classification and recommended next steps.

src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-13-report-complete.md (1)
86-87: ⚠️ Potential issue | 🟠 Major

Frontmatter template uses undeclared `{prd_path}` token.

This step declares `prdFile: '{prd_file_path}'` but writes `validationTarget: '{prd_path}'`. That mismatch can leave unresolved placeholders in the saved report.

Suggested fix
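A generic guard against this class of bug is to scan rendered output for tokens that were never substituted; this helper is a hypothetical sketch, not part of the workflow:

```python
import re

def unresolved_placeholders(rendered: str, declared: set[str]) -> set[str]:
    """Return placeholder tokens present in the output but never declared."""
    # Matches {snake_case} tokens; pattern is an assumption about token style.
    found = set(re.findall(r"\{([a-z_]+)\}", rendered))
    return found - declared
```

Run against the rendered frontmatter, a mismatch like `{prd_path}` vs a declared `prd_file_path` fails fast instead of leaking into the saved report.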
```diff
-validationTarget: '{prd_path}'
+validationTarget: '{prd_file_path}'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-13-report-complete.md` around lines 86 - 87, The frontmatter in step-v-13-report-complete.md uses an undeclared token "{prd_path}" for validationTarget while the step declares prdFile: '{prd_file_path}'; update validationTarget to use the same token (change validationTarget: '{prd_path}' to validationTarget: '{prd_file_path}') so it matches prdFile, or alternatively declare a matching {prd_path} variable—ensure consistency between the prdFile and validationTarget tokens.

src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-02-format-detection.md (1)
166-167: ⚠️ Potential issue | 🟠 Major

Exit branch terminates without finalizing report status.
If the user chooses `C (Exit)` for non-standard PRDs, the workflow exits without a terminal report status update, leaving the report potentially stuck in `IN_PROGRESS`.
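Finalizing the frontmatter status before exit could be sketched like this; the `status:` field name and status value are assumptions about the report format:

```python
import re

def finalize_status(report: str, status: str) -> str:
    """Set a terminal status in the report's frontmatter before exiting."""
    if re.search(r"(?m)^status: ", report):
        return re.sub(r"(?m)^status: .*$", f"status: {status}", report, count=1)
    return report  # no status field present to update
```

Calling this on every exit path gives downstream automation an unambiguous lifecycle state instead of a stale `IN_PROGRESS`.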
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-02-format-detection.md` around lines 166 - 167, The exit branch "C (Exit)" in step-v-02-format-detection.md currently exits without updating the PRD report status; modify the branch to call the workflow's report-finalization routine (e.g., invoke the existing finalizeReport/updateReportStatus function) to set a terminal status (such as ABANDONED or COMPLETED) and persist the change before returning, and include a short final-status note like "user exited during format detection" so the report does not remain IN_PROGRESS.

src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-09-project-type-validation.md (1)
82-91: ⚠️ Potential issue | 🟠 Major

Project-type enum is out of sync with `project-types.csv`.

The listed "Common project types" don't match the actual dataset keys (e.g., `saas_b2b`, `developer_tool`, `cli_tool`, `iot_embedded`, `blockchain_web3`, `game` exist in CSV but are omitted; `data_pipeline`, `ml_system`, `library_sdk`, `infrastructure` are listed but not in CSV). This will drive incorrect lookups in downstream steps.
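A quick consistency check between the documented list and the CSV could be scripted; the `project_type` column name is an assumption about the dataset:

```python
import csv
import io

def enum_drift(doc_types: set[str], csv_text: str) -> tuple[set[str], set[str]]:
    """Return (keys in CSV but not documented, documented keys not in CSV)."""
    csv_types = {row["project_type"] for row in csv.DictReader(io.StringIO(csv_text))}
    missing_from_doc = csv_types - doc_types
    not_in_csv = doc_types - csv_types
    return missing_from_doc, not_in_csv
```

Running a check like this in CI would catch future drift between the step file and `project-types.csv` automatically.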
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-09-project-type-validation.md` around lines 82 - 91, The "Common project types" list in step-v-09-project-type-validation.md is out of sync with project-types.csv causing mismatches; update the enum/list under "Common project types" to exactly match the keys in project-types.csv (add missing keys like saas_b2b, developer_tool, cli_tool, iot_embedded, blockchain_web3, game, etc. and remove entries not present in the CSV such as data_pipeline, ml_system, library_sdk, infrastructure), and then run a quick grep of downstream steps that reference this list to ensure all validations and lookup logic (the project-type enum/lookup usage) use the updated canonical set. Ensure the displayed list and any code or documentation referencing it are consistent with project-types.csv.

src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-13-report-complete.md (1)
89-89: ⚠️ Potential issue | 🟠 Major

Final step history omits the optional parity branch.
`validationStepsCompleted` is hardcoded without `step-v-02b-parity-check`, so reports lose traceability when non-standard PRDs ran parity analysis.
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-13-report-complete.md` at line 89, The hardcoded validationStepsCompleted array omits the optional parity branch, so when a PRD ran parity analysis the final report loses that traceability; update the logic that builds validationStepsCompleted (replace the static list or augment construction in the report generation for validationStepsCompleted) to conditionally include 'step-v-02b-parity-check' whenever the workflow/run metadata indicates parity was executed (check the same flag or function that triggers the parity step) and ensure the resulting array written by the step that emits validationStepsCompleted includes that symbol so reports reflect the optional branch.

src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-13-report-complete.md (1)
37-39: ⚠️ Potential issue | 🟠 Major

This final step violates its own "summary-only" contract by applying edits.
Step-specific rules forbid additional validation and scope this step to summarization, but option F performs direct fixes and report updates. This should be delegated to the edit workflow, not executed inline here.
Also applies to: 188-196
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-13-report-complete.md` around lines 37 - 39, The "Step V-13 (Report Complete)" text currently performs edits (Option F) which violates the step's summary-only rule; update the content so Option F and any similar lines (previously 188-196) no longer instruct performing fixes or updating reports but instead recommend or delegate those actions to the edit workflow (e.g., "Recommend fixes and open an edit task in the edit workflow"), remove any actionable verbs that imply direct edits or validation, and add a brief pointer that all corrections must be executed in the designated edit workflow.

src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-02b-parity-check.md (1)
183-186: ⚠️ Potential issue | 🟠 Major

Exit paths don't finalize report state before termination.
`E` and `S` exit the workflow, but there's no instruction to set a terminal status in report frontmatter (e.g., `PAUSED`/`EXITED_AFTER_PARITY`). This leaves lifecycle state ambiguous for downstream automation.
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-02b-parity-check.md` around lines 183 - 186, The E (Exit) and S (Save) branches in the parity menu terminate without updating the report frontmatter lifecycle state; update the exit paths so they write a terminal status field (e.g., set frontmatter "status" to "EXITED_AFTER_PARITY" on the E path and "PAUSED" or "SAVED_AFTER_PARITY" on the S path) before displaying the parity summary and exiting, and ensure the save flow persists the report with the new status and confirmation message; touch the code that handles the menu choices for "E" and "S" (the parity summary/exit logic) to perform the frontmatter update and persistence transaction prior to termination.

src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-09-project-type-validation.md (1)
112-147: ⚠️ Potential issue | 🟠 Major

Hardcoded requirements conflict with the declared CSV-driven contract.
Step 3 says to derive required/skip sections from `project-types.csv`, but this block redefines requirements manually and diverges from the CSV source. That creates conflicting validation outcomes depending on which instruction the agent follows.

Suggested fix (make Step 4 fully data-driven)
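The single-source-of-truth rule lookup could be sketched like this; the `required_sections`/`skip_sections` column names and `;` separator are assumptions about the dataset layout:

```python
import csv
import io

def load_project_type_rules(csv_text: str) -> dict[str, dict[str, list[str]]]:
    """Parse project-type rules from CSV so no step needs a hardcoded mapping."""
    rules: dict[str, dict[str, list[str]]] = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        rules[row["project_type"]] = {
            "required": [s.strip() for s in row["required_sections"].split(";") if s.strip()],
            "skip": [s.strip() for s in row["skip_sections"].split(";") if s.strip()],
        }
    return rules
```

With all rules flowing from one loader, Step 3 and Step 4 cannot disagree about what a given project type requires.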
```diff
-### 4. Validate Against CSV-Based Requirements
-
-**Based on project type, determine:**
-
-**api_backend:**
-- Required: Endpoint Specs, Auth Model, Data Schemas, API Versioning
-- Excluded: UX/UI sections, mobile-specific sections
-...
-**infrastructure:**
-- Required: Infrastructure Components, Deployment, Monitoring, Scaling
-- Excluded: Feature requirements (this is infrastructure, not product)
+### 4. Validate Using CSV-Derived Requirements
+
+Use only the `required_sections` and `skip_sections` values loaded in Step 3 for the detected `projectType`.
+Do not apply any additional hardcoded project-type mapping in this step.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-09-project-type-validation.md` around lines 112 - 147, The hardcoded requirement lists under the "Validate Against CSV-Based Requirements" block conflict with Step 3's CSV-driven contract; remove these manual per-project-type definitions and instead read/parse the project-types.csv at runtime (the same source referenced in Step 3) to produce the required and excluded section lists, then drive validation logic from that parsed CSV output; update the "Validate Against CSV-Based Requirements" logic to call the CSV loader/mapper used elsewhere (or a new function like loadProjectTypeRules) and use its returned rules for api_backend, web_app, mobile_app, desktop_app, data_pipeline, ml_system, library_sdk, and infrastructure validations.
🟡 Minor comments (7)
src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-03-density-validation.md-6-6 (1)
6-6: ⚠️ Potential issue | 🟡 Minor

Two sources of truth for next step introduce drift risk
`nextStepFile` already defines navigation, but Line 151 also hardcodes the filename in parentheses. If one changes, routing instructions diverge.

Proposed fix
```diff
-Without delay, read fully and follow: {nextStepFile} (step-v-04-brief-coverage-validation.md)
+Without delay, read fully and follow: {nextStepFile}
```

Also applies to: 151-151
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-03-density-validation.md` at line 6, The file duplicates the next-step target by both setting nextStepFile ('./step-v-04-brief-coverage-validation.md') and hardcoding the same filename elsewhere (the parenthetical navigation text), creating two sources of truth; update the parenthetical/navigation text to reference nextStepFile (or render it dynamically) or remove the hardcoded filename so only nextStepFile determines routing, and ensure any displayed link or text uses that single variable (search for nextStepFile and the literal './step-v-04-brief-coverage-validation.md' in this document and replace the literal with a reference to nextStepFile).

src/bmm/workflows/2-plan-workflows/bmad-validate-prd/data/prd-purpose.md-3-3 (1)
3-3: ⚠️ Potential issue | 🟡 Minor

Typo in opening sentence.
"rhw BMad Method" should be "the BMAD Method"
Fix
```diff
-**The PRD is the top of the required funnel that feeds all subsequent product development work in rhw BMad Method.**
+**The PRD is the top of the required funnel that feeds all subsequent product development work in the BMAD Method.**
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/2-plan-workflows/bmad-validate-prd/data/prd-purpose.md` at line 3, The opening sentence in prd-purpose.md contains a typo: replace "rhw BMad Method" with "the BMAD Method" in the sentence "**The PRD is the top of the required funnel that feeds all subsequent product development work in rhw BMad Method.**" so the line reads "**The PRD is the top of the required funnel that feeds all subsequent product development work in the BMAD Method.**"

src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-10-smart-validation.md-104-107 (1)
104-107: ⚠️ Potential issue | 🟡 Minor

Subprocess prompt lacks format specification for improvement suggestions.
Line 105 requests "Provide specific improvement suggestions" for low-scoring FRs but doesn't specify the expected format, structure, or level of detail. The subprocess could return anything from single-word hints to multi-paragraph essays, breaking the downstream template rendering.
Suggested enhancement
```diff
 **For each FR with score < 3 in any category:**
-- Provide specific improvement suggestions
+- Provide specific improvement suggestions (1-2 sentences per category, actionable)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-10-smart-validation.md` around lines 104 - 107, The subprocess prompt for "For each FR with score < 3" is underspecified; update it to require a strict, machine-parsable structure so downstream rendering can't break: require that for every low-scoring FR (score < 3) you output a JSON array/object or a markdown table keyed by FR id/title that includes (a) 2-3 numbered, actionable improvement suggestions per FR, each 1-2 concise sentences, (b) a severity tag (low/medium/high) and (c) a one-line estimated priority or next step; ensure the output format matches the "Return scoring table with all FR scores and improvement suggestions" expectation so the template can reliably parse suggestions.

src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-12-completeness-validation.md-72-73 (1)
72-73: ⚠️ Potential issue | 🟡 Minor

Template variable search patterns incomplete.
Line 72 searches for:
`{variable}`, `{{variable}}`, `{placeholder}`, `[placeholder]`

This misses common template syntaxes:

- `${variable}` (shell/JS style)
- `%variable%` (Windows batch style)
- `{{ variable }}` (whitespace variations)
- `{variable_name}` with underscores/hyphens

It also won't catch malformed variables like `{ variable }` (spaces inside braces).

```diff
-- Look for: {variable}, {{variable}}, {placeholder}, [placeholder], etc.
+- Look for: {variable}, {{variable}}, ${variable}, %variable%, [placeholder], etc.
+- Include whitespace variations: { variable }, {{ variable }}
```

🤖 Prompt for AI Agents
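A scanner broadened along these lines might look like the following sketch; the pattern list mirrors this review's suggestion, not the step's current contract:

```python
import re

# Patterns for the syntaxes the review says the current check misses;
# the exact list is a suggestion, not the skill's actual specification.
TEMPLATE_PATTERNS = [
    r"\{\{\s*[\w-]+\s*\}\}",  # {{variable}}, {{ variable }}
    r"\$\{\s*[\w-]+\s*\}",    # ${variable}
    r"%[\w-]+%",              # %variable%
    r"\{\s*[\w-]+\s*\}",      # {variable}, { variable }
    r"\[\s*[\w-]+\s*\]",      # [placeholder]
]

def find_template_variables(text: str) -> list:
    """Return every unresolved template-variable occurrence, in document order."""
    combined = re.compile("|".join(TEMPLATE_PATTERNS))
    return [m.group(0) for m in combined.finditer(text)]
```

Ordering the double-brace alternative before the single-brace one keeps `{{ name }}` from being reported as two malformed single-brace hits.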
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-12-completeness-validation.md` around lines 72 - 73, Update the "Look for:" template variable search list to cover additional syntaxes and whitespace/malformed variants: add patterns for ${variable}, %variable%, and allow whitespace inside double/single braces (e.g., {{ variable }} and { variable }), and accept names with underscores and hyphens (e.g., {variable_name}, {variable-name}); modify the check referenced by the "Look for:" list so it uses regex-like checks that match ${...}, %...%, both {{...}} and {{ ... }}, { ... } and {name_with_underscores-or-hyphens} and malformed spacing, and include examples in the output (the same list that currently contains `{variable}, {{variable}}, {placeholder}, [placeholder]`).

src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-10-smart-validation.md-164-171 (1)
164-171: ⚠️ Potential issue | 🟡 Minor

Severity threshold boundary ambiguity.
Line 166 defines:
- Critical if >30% flagged
- Warning if 10-30%
- Pass if <10%
What severity applies at exactly 10% or exactly 30%? The ranges have inclusive/exclusive boundaries that aren't specified, creating non-deterministic behavior at the edges.
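Pinned down, the convention can be expressed directly; a sketch treating both edge values as inclusive of the higher severity (so exactly 10% is a Warning and exactly 30% is Critical):

```python
def smart_severity(flagged: int, total: int) -> str:
    """Map the flagged-FR ratio to a severity with explicit boundaries:
    >= 30% Critical, >= 10% Warning, otherwise Pass."""
    if total == 0:
        return "Pass"  # no FRs to flag
    ratio = flagged / total
    if ratio >= 0.30:
        return "Critical"
    if ratio >= 0.10:
        return "Warning"
    return "Pass"
```

Whichever direction the boundaries go, encoding them as ordered `>=` comparisons makes the edge behavior deterministic.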
Proposed fix: Explicit boundaries
```diff
-**Severity:** [Critical if >30% flagged FRs, Warning if 10-30%, Pass if <10%]
+**Severity:** [Critical if ≥30% flagged FRs, Warning if 10-29%, Pass if <10%]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-10-smart-validation.md` around lines 164 - 171, Update the "Overall Assessment" severity boundaries to be explicit about inclusivity for the edge values in the "Overall Assessment" block (the lines defining "Critical if >30%, Warning if 10-30%, Pass if <10%"); choose and state clear operators for 30% and 10% (for example: "Critical if >=30%, Warning if >=10% and <30%, Pass if <10%") and replace the existing ambiguous lines with that explicit wording so the severity at exactly 10% and exactly 30% is deterministic.

src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-12-completeness-validation.md-242-242 (1)
242-242: ⚠️ Potential issue | 🟡 Minor

Master Rule states "must be fixed" but step only reports, never fixes.
Line 242: "Template variables or critical gaps must be fixed."
But this step is autonomous validation with no user interaction. It can only report issues, not fix them. The Master Rule implies a remediation step that doesn't exist.
Either this step should HALT the workflow and demand fixes, or the Master Rule should accurately reflect that this is a reporting-only gate.
Proposed clarification
```diff
-**Master Rule:** Final gate to ensure document is complete before presenting findings. Template variables or critical gaps must be fixed.
+**Master Rule:** Final gate to identify completeness issues before presenting findings. Reports template variables and critical gaps for user review.
```

OR, if this should be a hard gate:

```diff
 **Master Rule:** Final gate to ensure document is complete before presenting findings. Template variables or critical gaps must be fixed.
+
+**If Critical severity:** HALT workflow and display: "PRD has critical completeness gaps. Address issues before continuing validation."
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-12-completeness-validation.md` at line 242, Update the Master Rule text or the step behavior in step-v-12-completeness-validation.md so they align: either change the Master Rule phrase "must be fixed" to indicate this is a reporting-only validation (e.g., "must be reported" or "requires remediation by a subsequent step") or implement a hard-gate behavior in the step (halt the workflow and mark failure) so that template variables/critical gaps stop progress; locate the Master Rule string in the document (the heading or line containing 'Master Rule: Final gate...') and either edit the wording to reflect reporting-only behavior or add logic/metadata to the step to enforce a halt/failure outcome when issues are found.

src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-13-report-complete.md-199-201 (1)
199-201: ⚠️ Potential issue | 🟡 Minor

Exit path does not explicitly persist a terminal validation status.
On `X`, the step exits and invokes help, but doesn't require writing a final lifecycle state to the report (e.g., COMPLETE/EXITED) right before termination.
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-13-report-complete.md` around lines 199 - 201, The exit path currently invokes the bmad-help skill without persisting a terminal lifecycle status; before calling bmad-help, update the validation report at validationReportPath to write a final status field (e.g., "lifecycle": "COMPLETE" or "EXITED") and update the Display/summary (the lines that reference "{validationReportPath}" and "{overall status} - {recommendation}") to reflect that final state; ensure the code that performs the save is executed on the exit branch of the step (the same place that triggers bmad-help) so the report is written atomically before invoking bmad-help.
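Persisting a terminal state before exit can be a small frontmatter update. A sketch assuming a YAML-style `---` frontmatter block and a hypothetical `status` scalar; neither is the skill's confirmed schema:

```python
import re

def set_report_status(report_text: str, status: str) -> str:
    """Write a terminal lifecycle status into the report's frontmatter.

    Assumes a YAML-style '---' frontmatter block and a simple scalar
    'status' field; both are illustrative, not the skill's actual schema.
    """
    if not report_text.startswith("---"):
        # No frontmatter yet: create a minimal block carrying the status.
        return f"---\nstatus: {status}\n---\n{report_text}"
    if re.search(r"(?m)^status:", report_text):
        # Replace the existing status scalar in place.
        return re.sub(r"(?m)^status:.*$", f"status: {status}", report_text, count=1)
    # Frontmatter exists but has no status field: insert one at the top.
    return report_text.replace("---\n", f"---\nstatus: {status}\n", 1)
```

Calling this on every exit branch (E, S, X) right before termination would give each path its own terminal state without duplicating save logic.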
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 7e89c10d-d814-49f2-a71d-1dbe11329452
⛔ Files ignored due to path filters (3)
- `src/bmm/module-help.csv` is excluded by `!**/*.csv`
- `src/bmm/workflows/2-plan-workflows/bmad-validate-prd/data/domain-complexity.csv` is excluded by `!**/*.csv`
- `src/bmm/workflows/2-plan-workflows/bmad-validate-prd/data/project-types.csv` is excluded by `!**/*.csv`
📒 Files selected for processing (21)
- `src/bmm/agents/pm.agent.yaml`
- `src/bmm/workflows/2-plan-workflows/bmad-validate-prd/SKILL.md`
- `src/bmm/workflows/2-plan-workflows/bmad-validate-prd/bmad-skill-manifest.yaml`
- `src/bmm/workflows/2-plan-workflows/bmad-validate-prd/data/prd-purpose.md`
- `src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-01-discovery.md`
- `src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-02-format-detection.md`
- `src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-02b-parity-check.md`
- `src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-03-density-validation.md`
- `src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-04-brief-coverage-validation.md`
- `src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-05-measurability-validation.md`
- `src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-06-traceability-validation.md`
- `src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-07-implementation-leakage-validation.md`
- `src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-08-domain-compliance-validation.md`
- `src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-09-project-type-validation.md`
- `src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-10-smart-validation.md`
- `src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-11-holistic-quality-validation.md`
- `src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-12-completeness-validation.md`
- `src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-13-report-complete.md`
- `src/bmm/workflows/2-plan-workflows/bmad-validate-prd/workflow.md`
- `src/bmm/workflows/2-plan-workflows/create-prd/bmad-skill-manifest.yaml`
- `src/bmm/workflows/2-plan-workflows/create-prd/workflow-validate-prd.md`
💤 Files with no reviewable changes (1)
- src/bmm/workflows/2-plan-workflows/create-prd/bmad-skill-manifest.yaml
```
1. **Executive Summary** - Vision, differentiator, target users
2. **Success Criteria** - Measurable outcomes (SMART)
3. **Product Scope** - MVP, Growth, Vision phases
4. **User Journeys** - Comprehensive coverage
5. **Domain Requirements** - Industry-specific compliance (if applicable)
6. **Innovation Analysis** - Competitive differentiation (if applicable)
7. **Project-Type Requirements** - Platform-specific needs
8. **Functional Requirements** - Capability contract (FRs)
9. **Non-Functional Requirements** - Quality attributes (NFRs)
```
Required sections list doesn't match validation step implementation.
Lines 131-139 define required sections:
- Executive Summary
- Success Criteria
- Product Scope
- User Journeys
- Domain Requirements
- Innovation Analysis
- Project-Type Requirements
- Functional Requirements
- Non-Functional Requirements
But step-v-12-completeness-validation.md (lines 76-81) only checks:
- Executive Summary
- Success Criteria
- Product Scope
- User Journeys
- Functional Requirements
- Non-Functional Requirements
Domain Requirements, Innovation Analysis, and Project-Type Requirements are missing from the validation step.
Either the validation step is incomplete, or this list incorrectly labels optional sections as "Required."
This is a direct mismatch between specification and implementation.
Proposed alignment
Option 1: Update prd-purpose.md to distinguish mandatory vs. conditional sections:
```diff
 ### Required Sections
-1. **Executive Summary** - Vision, differentiator, target users
-2. **Success Criteria** - Measurable outcomes (SMART)
-3. **Product Scope** - MVP, Growth, Vision phases
-4. **User Journeys** - Comprehensive coverage
-5. **Domain Requirements** - Industry-specific compliance (if applicable)
-6. **Innovation Analysis** - Competitive differentiation (if applicable)
-7. **Project-Type Requirements** - Platform-specific needs
-8. **Functional Requirements** - Capability contract (FRs)
-9. **Non-Functional Requirements** - Quality attributes (NFRs)
+
+**Mandatory Sections (all PRDs):**
+1. **Executive Summary** - Vision, differentiator, target users
+2. **Success Criteria** - Measurable outcomes (SMART)
+3. **Product Scope** - MVP, Growth, Vision phases
+4. **User Journeys** - Comprehensive coverage
+5. **Functional Requirements** - Capability contract (FRs)
+6. **Non-Functional Requirements** - Quality attributes (NFRs)
+
+**Conditional Sections (based on domain/project type):**
+7. **Domain Requirements** - Industry-specific compliance (if applicable)
+8. **Innovation Analysis** - Competitive differentiation (if applicable)
+9. **Project-Type Requirements** - Platform-specific needs (if applicable)
```

Option 2: Update step-v-12 to validate all 9 sections.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```
1. **Executive Summary** - Vision, differentiator, target users
2. **Success Criteria** - Measurable outcomes (SMART)
3. **Product Scope** - MVP, Growth, Vision phases
4. **User Journeys** - Comprehensive coverage
5. **Functional Requirements** - Capability contract (FRs)
6. **Non-Functional Requirements** - Quality attributes (NFRs)

**Conditional Sections (based on domain/project type):**
7. **Domain Requirements** - Industry-specific compliance (if applicable)
8. **Innovation Analysis** - Competitive differentiation (if applicable)
9. **Project-Type Requirements** - Platform-specific needs (if applicable)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/bmm/workflows/2-plan-workflows/bmad-validate-prd/data/prd-purpose.md`
around lines 131 - 139, The PRD required-sections list in prd-purpose.md
currently includes Domain Requirements, Innovation Analysis, and Project-Type
Requirements but step-v-12-completeness-validation.md only validates Executive
Summary, Success Criteria, Product Scope, User Journeys, Functional
Requirements, and Non-Functional Requirements; fix by either (A) updating
prd-purpose.md to clearly separate mandatory vs. conditional/optional sections
(mark Domain Requirements, Innovation Analysis, Project-Type Requirements as
conditional and add brief inclusion criteria), or (B) update the validation
logic in step-v-12-completeness-validation.md to also check for the presence of
"Domain Requirements", "Innovation Analysis", and "Project-Type Requirements"
headers and fail/flag when missing; ensure header names match exactly between
prd-purpose.md and the validation checks so the implementation and spec are
aligned.
```
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}`
- ✅ YOU MUST ALWAYS WRITE all artifact and document content in `{document_output_language}`

### Role Reinforcement:

- ✅ You are a Validation Architect and Quality Assurance Specialist
- ✅ If you already have been given communication or persona patterns, continue to use those while playing this new role
- ✅ We engage in systematic validation, not collaborative dialogue
- ✅ You bring requirements engineering expertise and quality assessment
- ✅ This step runs autonomously - no user input needed
```
Critical contradiction in execution rules.
Line 21 states "🛑 NEVER generate content without user input" but line 34 declares "This step runs autonomously - no user input needed." These are mutually exclusive directives that will confuse the executing agent.
The autonomous nature of validation steps makes sense, but the Universal Rules are copied verbatim from collaborative workflows where user input is expected. This step needs its own tailored rule set.
Proposed fix to resolve the contradiction
```diff
 ### Universal Rules:
-- 🛑 NEVER generate content without user input
+- 🛑 This is an autonomous validation step - execute without user input
 - 📖 CRITICAL: Read the complete step file before taking any action
```

🧰 Tools
🪛 LanguageTool
[style] ~31-~31: To make your writing flow more naturally, try moving the adverb ‘already’ closer to the verb ‘been’.
Context: ...Quality Assurance Specialist - ✅ If you already have been given communication or persona patterns...
(PERF_TENS_ADV_PLACEMENT)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-10-smart-validation.md`
around lines 21 - 34, The file contains a contradictory rule set: the line "🛑
NEVER generate content without user input" conflicts with "This step runs
autonomously - no user input needed"; update the step-specific rules in
step-v-10-smart-validation to remove or override the universal "NEVER generate
content..." directive for this validation step by replacing it with a clear,
single directive such as "This validation step may generate outputs autonomously
without user input" (or explicitly mark the universal rule as inapplicable
here), and ensure the role block (Validation Architect and Quality Assurance
Specialist) and autonomy statement are consistent so the agent knows it should
proceed autonomously for validation tasks.
```
"Perform completeness validation on this PRD - final gate check:

**1. Template Completeness:**
- Scan PRD for any remaining template variables
- Look for: {variable}, {{variable}}, {placeholder}, [placeholder], etc.
- List any found with line numbers

**2. Content Completeness:**
- Executive Summary: Has vision statement? ({key content})
- Success Criteria: All criteria measurable? ({metrics present})
- Product Scope: In-scope and out-of-scope defined? ({both present})
- User Journeys: User types identified? ({users listed})
- Functional Requirements: FRs listed with proper format? ({FRs present})
- Non-Functional Requirements: NFRs with metrics? ({NFRs present})

For each section: Is required content present? (Yes/No/Partial)

**3. Section-Specific Completeness:**
- Success Criteria: Each has specific measurement method?
- User Journeys: Cover all user types?
- Functional Requirements: Cover MVP scope?
- Non-Functional Requirements: Each has specific criteria?

**4. Frontmatter Completeness:**
- stepsCompleted: Populated?
- classification: Present (domain, projectType)?
- inputDocuments: Tracked?
- date: Present?

Return completeness matrix with status for each check."
```
Subprocess prompt uses yes/no checks but expects Complete/Incomplete/Missing trichotomy.
Lines 76-81 instruct the subprocess to answer questions like:
- "Has vision statement? ({key content})"
- "All criteria measurable? ({metrics present})"
These are yes/no questions. But lines 112-118 expect each section to be classified as "Complete / Incomplete / Missing" — a three-way distinction.
The subprocess cannot answer the question it's being asked.
Proposed fix to align prompt with expected output
```diff
 **2. Content Completeness:**
-- Executive Summary: Has vision statement? ({key content})
-- Success Criteria: All criteria measurable? ({metrics present})
-- Product Scope: In-scope and out-of-scope defined? ({both present})
+- Executive Summary: Complete (has all required content), Incomplete (partial content), or Missing?
+- Success Criteria: Complete (all measurable), Incomplete (some measurable), or Missing?
+- Product Scope: Complete (in-scope and out-of-scope defined), Incomplete (only one defined), or Missing?
```

Apply to all section checks.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-12-completeness-validation.md`
around lines 68 - 97, The subprocess prompt asks binary yes/no questions (e.g.,
under "Content Completeness" items like "Has vision statement? ({key content})"
and "All criteria measurable? ({metrics present})") but the downstream output
expects a three-way status (Complete / Incomplete / Missing); update the
checklist items in "Template Completeness", "Content Completeness",
"Section-Specific Completeness", and "Frontmatter Completeness" to request the
trichotomy directly and to capture evidence: replace "Has X?" and other yes/no
phrasing with "Status (Complete / Incomplete / Missing) for X and provide
supporting evidence or missing elements (e.g., vision statement text or line
numbers, metrics present, users listed)", and ensure each bullet asks for the
explicit status and evidence so the subprocess can produce the expected
Complete/Incomplete/Missing matrix.
```
- **IF E (Use Edit Workflow):**
  - Explain: "The Edit workflow (steps-e/) can use this validation report to systematically address issues. Edit mode will guide you through discovering what to edit, reviewing the PRD, and applying targeted improvements."
  - Offer: "Would you like to launch Edit mode now? It will help you fix validation findings systematically."
  - If yes: Read fully and follow: steps-e/step-e-01-discovery.md
```
Edit workflow path is likely wrong from steps-v context.
steps-e/step-e-01-discovery.md resolves as a child of steps-v in relative resolution models. This should point to sibling directory (../steps-e/...) to avoid broken navigation.
Suggested fix
```diff
- - If yes: Read fully and follow: steps-e/step-e-01-discovery.md
+ - If yes: Read fully and follow: ../steps-e/step-e-01-discovery.md
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@src/bmm/workflows/2-plan-workflows/bmad-validate-prd/steps-v/step-v-13-report-complete.md`
at line 185, The relative link in step-v-13-report-complete.md currently points
to "steps-e/step-e-01-discovery.md", which will be resolved as a child of the
current steps-v directory and break navigation; update the link to use a
sibling-relative path ("../steps-e/step-e-01-discovery.md") so the reference
from step-v-13-report-complete.md correctly resolves to the steps-e directory.
Force-pushed from `e1bc2fd` to `02848f9`.
Move validate-prd from a workflow entry in create-prd manifest to a self-contained skill at src/bmm/workflows/2-plan-workflows/bmad-validate-prd/. Update pm.agent.yaml and module-help.csv to use skill: URI. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Force-pushed from `02848f9` to `a136713`.
Summary
- Move `validate-prd` from the shared `create-prd/bmad-skill-manifest.yaml` into a self-contained native skill at `src/bmm/workflows/2-plan-workflows/bmad-validate-prd/`
- Update `pm.agent.yaml` and `module-help.csv` to reference `skill:bmad-validate-prd` instead of the old workflow path
- Add `standalone: false` to the source `workflow-validate-prd.md` to prevent duplicate workflow-manifest registration
- Copy `steps-v/` (14 step files) and `data/` (3 shared data files) verbatim into the new skill directory

Test plan
- Run the installer (`bmad-cli.js install`)
- `.claude/skills/bmad-validate-prd/` installed with SKILL.md, workflow.md, steps-v/, data/
- `skill-manifest.csv` contains a `bmad-validate-prd` entry
- `workflow-manifest.csv` has no validate-prd entry
- `npm run quality` passes (format, lint, markdown lint, docs build, schema validation, tests, file ref validation)

🤖 Generated with Claude Code