Prototype preview of new version of product brief skill #1959
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…-preview

Drop the redundant bmm prefix from the product brief preview skill folder to align with the standard naming convention.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add comprehensive reference doc for all 11 built-in core tools (tasks and workflows) that ship with every BMad installation — bmad-help, brainstorming, party-mode, distillator, advanced-elicitation, both review tools, both editorial tools, shard-doc, and index-docs. Each entry follows the Configuration Reference structure with purpose, use cases, how it works, inputs, and outputs.

Style fixes across existing docs:

- reference/commands.md: convert #### headers to bold text, replace sparse task table with link to new core-tools reference
- how-to/get-answers-about-bmad.md: remove horizontal rule between sections (Diataxis violation)
- how-to/project-context.md: consolidate 4 consecutive tip admonitions into a single admonition with a bullet list, add AGENTS.md reference

Also includes:

- Add bmad-distillator task to core module with compression agents, format reference, splitting strategy, and analysis scripts
- Add Distillator entry to module-help.csv
- Rename supports-autonomous to supports-headless in the product-brief manifest

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@coderabbitai review
🤖 Augment PR Summary

Summary: Prototype preview of a revamped “create product brief” capability and expanded core-tool documentation.

Changes:

Technical Notes: The new preview product-brief flow relies on intra-skill prompt/agent file references and a JSON capability manifest (…).
argument-hint: "[optional --create, --edit, --optimize, --distillate, --inputs, --headless] [brief idea]"
---

# Create Product Brief
This directory lives under src/**/workflows/** but doesn’t include a workflow.md entry point, which conflicts with the repo’s workflow structure rule. If this is intended to be a workflow, consider adding workflow.md per the workflow_entry_point_required guideline.
Severity: high
---
name: bmad-product-brief-preview
description: Create or update product briefs through guided or autonomous discovery. Use when the user requests to 'create a product brief', 'help me create a project brief', or 'update my product brief'.
argument-hint: "[optional --create, --edit, --optimize, --distillate, --inputs, --headless] [brief idea]"
The frontmatter argument-hint doesn’t mention --autonomous/-A or --yolo, but Activation Mode Detection relies on those flags, so users may not discover them. Consider aligning the hint with the supported activation flags.
Severity: medium
**Launch in parallel:**

1. **Artifact Analyzer** (`agents/artifact-analyzer.md`) — Scans `{planning_artifacts}` and `{project_knowledge}` for relevant documents. Also scans any specific paths the user provided. Returns structured synthesis of what it found.
These prompt files reference other files as agents/... and prompts/..., but from within the prompts/ directory those paths won’t resolve (typically should be ../agents/... and ./...). This is likely to break just-in-time loading / path validation for this workflow.
Severity: high
Other Locations

- src/bmm/workflows/1-analysis/bmad-product-brief-preview/prompts/contextual-discovery.md:17
- src/bmm/workflows/1-analysis/bmad-product-brief-preview/prompts/contextual-discovery.md:41
- src/bmm/workflows/1-analysis/bmad-product-brief-preview/prompts/contextual-discovery.md:45
- src/bmm/workflows/1-analysis/bmad-product-brief-preview/prompts/contextual-discovery.md:50
- src/bmm/workflows/1-analysis/bmad-product-brief-preview/prompts/contextual-discovery.md:56
- src/bmm/workflows/1-analysis/bmad-product-brief-preview/prompts/contextual-discovery.md:57
- src/bmm/workflows/1-analysis/bmad-product-brief-preview/prompts/draft-and-review.md:11
- src/bmm/workflows/1-analysis/bmad-product-brief-preview/prompts/draft-and-review.md:39
- src/bmm/workflows/1-analysis/bmad-product-brief-preview/prompts/draft-and-review.md:41
- src/bmm/workflows/1-analysis/bmad-product-brief-preview/prompts/draft-and-review.md:68
- src/bmm/workflows/1-analysis/bmad-product-brief-preview/prompts/draft-and-review.md:86
- src/bmm/workflows/1-analysis/bmad-product-brief-preview/prompts/finalize.md:75
- src/bmm/workflows/1-analysis/bmad-product-brief-preview/prompts/guided-elicitation.md:8
- src/bmm/workflows/1-analysis/bmad-product-brief-preview/prompts/guided-elicitation.md:70
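The failure mode is easy to sketch in a few lines of Python (the paths mirror the workflow layout; the file names are taken from this PR, nothing else is):

```python
from pathlib import Path

# A prompt file inside the workflow's prompts/ directory.
prompt_file = Path(
    "src/bmm/workflows/1-analysis/bmad-product-brief-preview/prompts/contextual-discovery.md"
)

# A reference written as "agents/artifact-analyzer.md" resolves relative to the
# prompt file's own directory, i.e. inside prompts/, where no agents/ folder exists.
broken = prompt_file.parent / "agents/artifact-analyzer.md"

# Prefixing "../" first climbs back to the workflow root, then descends into agents/.
fixed = (prompt_file.parent / "../agents/artifact-analyzer.md").resolve()

print(broken)  # ends in prompts/agents/artifact-analyzer.md (does not exist)
print(fixed)   # ends in bmad-product-brief-preview/agents/artifact-analyzer.md
```

The same reasoning applies to `prompts/...` references from within `prompts/`, which should be plain `./...` siblings.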
"description": "Produces executive product brief and optional LLM distillate for PRD input.",
"supports-headless": true,
"phase-name": "1-analysis",
"after": ["brainstorming, perform-research"],
after is set to ["brainstorming, perform-research"] as a single comma-separated string; if consumers expect an array of discrete capability names, dependency ordering may not work. Should these be two separate entries?
Severity: medium
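A minimal check of why the fused entry breaks exact-name lookups, assuming consumers match discrete capability names (the consumer behavior is an assumption; the manifest values are from the diff):

```python
import json

# The manifest as written: one comma-separated string inside the array.
as_written = json.loads('{"after": ["brainstorming, perform-research"]}')

# What a dependency-ordering consumer presumably expects: discrete names.
expected = json.loads('{"after": ["brainstorming", "perform-research"]}')

print(len(as_written["after"]))                 # 1: a single fused entry
print(len(expected["after"]))                   # 2: two capability names
print("brainstorming" in as_written["after"])   # False: exact-match lookup misses
```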
You receive: source document file paths, an optional downstream_consumer context, and a splitting decision.

You must load and apply `resources/compression-rules.md` before producing output. Reference `resources/distillate-format-reference.md` for the expected output format.
This agent lives under agents/ but references resources/... without ../, so the referenced rule files won’t resolve relative to this file’s location. That could break compression/splitting guidance when the agent tries to load those resources.
Severity: high
Other Locations

- src/core/skills/bmad-distillator/agents/distillate-compressor.md:34
- src/core/skills/bmad-distillator/agents/distillate-compressor.md:61
- src/core/skills/bmad-distillator/agents/distillate-compressor.md:83
)
args = parser.parse_args()
analyze(args.inputs, args.output)
sys.exit(0)
The CLI always exits with code 0 (sys.exit(0)) even when analysis reports status: "error" (e.g., no files found), which makes it hard to use in scripts/CI reliably. Consider using a non-zero exit code when emitting an error status.
Severity: medium
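A minimal sketch of the suggested fix, with `analyze` stubbed out (that the real `analyze()` would return its report dict is an assumption; today it only writes the report):

```python
def analyze(inputs, output):
    # Stand-in for the real analysis. Hypothetical change: return the report
    # dict in addition to writing it, so the CLI can inspect its status.
    if not inputs:
        return {"status": "error", "message": "no files found"}
    return {"status": "ok"}

def exit_code(report):
    # Non-zero when the analysis itself reports failure, so scripts/CI can gate.
    return 1 if report.get("status") == "error" else 0

print(exit_code(analyze([], "out.json")))          # 1
print(exit_code(analyze(["doc.md"], "out.json")))  # 0
```

In the CLI tail, `sys.exit(exit_code(report))` would then replace the unconditional `sys.exit(0)`.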
@@ -1,30 +0,0 @@
# BMad Master Task Executor Agent
This PR removes the bmad-master core agent definition; there are still references to core-bmad-master.customize.yaml in docs and to core/agents/bmad-master.md in tests, so installs/tests may drift. Consider updating/removing those references if bmad-master is intentionally retired.
Severity: medium
📝 Walkthrough

This PR introduces two major new skills (bmad-distillator and bmad-product-brief-preview) with comprehensive documentation, agents, and workflows. It removes legacy agent files, adds extensive documentation for core tools, and updates module configurations to reflect the new skill-based architecture.

Changes
Sequence Diagram(s)

sequenceDiagram
participant User
participant Skill as Product Brief<br/>Preview Skill
participant S1 as Stage 1:<br/>Understand Intent
participant S2 as Stage 2:<br/>Contextual Discovery
participant AA as Artifact<br/>Analyzer
participant WR as Web<br/>Researcher
participant S3 as Stage 3:<br/>Guided Elicitation
participant S4 as Stage 4:<br/>Draft & Review
participant SR as Skeptic<br/>Reviewer
participant OR as Opportunity<br/>Reviewer
participant S5 as Stage 5:<br/>Finalize
User->>Skill: Activate with mode (Autonomous/Yolo/Guided)
Skill->>S1: Load config & greet
S1->>S1: Identify goal, detect brief-type,<br/>handle existing brief
S1->>S2: Route to Stage 2
S2->>AA: Fan-out: Analyze artifacts
S2->>WR: Fan-out: Research market
AA-->>S2: Extract context
WR-->>S2: Return findings
S2->>S2: Synthesize & identify gaps
S2->>S3: Route to Stage 3
S3->>S3: Conduct targeted questioning<br/>(skip in Yolo/Autonomous)
S3->>S4: Route to Stage 4
S4->>S4: Draft brief from template
S4->>SR: Fan-out: Critical review
S4->>OR: Fan-out: Opportunity review
SR-->>S4: Flag gaps & assumptions
OR-->>S4: Identify untapped value
S4->>S4: Triage & apply improvements
S4->>S5: Route to Stage 5
S5->>S5: Polish brief & optionally<br/>create distillate
S5->>User: Present completion with<br/>file paths & next actions
sequenceDiagram
participant User
participant Skill as Distillator Skill
participant AA as Analyze:<br/>Sources & Config
participant CC as Compress:<br/>Compressor Agent
participant VV as Verify &<br/>Output
participant RR as Round-Trip:<br/>Reconstructor
participant User2 as QA/Review
User->>Skill: Invoke with source documents
Skill->>AA: Stage 1: Analyze inputs
AA->>AA: Enumerate files, detect types,<br/>estimate tokens, group related docs
AA-->>Skill: Return analysis report
Skill->>CC: Stage 2: Compress to distillate
CC->>CC: Extract discrete info,<br/>deduplicate, filter,<br/>group thematically,<br/>compress language
CC-->>Skill: Return distillate (single or split)
Skill->>VV: Stage 3: Verify & Output
VV->>VV: Format output, validate<br/>against schema
VV-->>Skill: Return distillate files
Skill->>RR: Stage 4: Round-Trip Validation
RR->>RR: Reconstruct original docs<br/>from distillate,<br/>flag gaps
RR-->>Skill: Return reconstruction report
Skill-->>User: Deliver distillate + validation
User2->>User2: Review for completeness
Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 1
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
🟠 Major comments (21)
docs/how-to/project-context.md-8-8 (1)
8-8: ⚠️ Potential issue | 🟠 Major: Conflicting scope claims for where `project-context.md` is applied

Line 8 says this drives behavior “throughout all workflows,” but Line 119 narrows it to “every implementation workflow.” Those are not equivalent and will mislead users about expected impact. Pick one scope and keep it consistent across the page.
Based on learnings: “Treat project-context.md as a singleton metadata file in BMAD workflows… use `**/project-context.md` to locate workflow context regardless of folder structure.”

Also applies to: 119-119
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/how-to/project-context.md` at line 8, The doc currently uses conflicting scope language ("throughout all workflows" vs "every implementation workflow"); pick a single, consistent scope statement (e.g., "Treat project-context.md as a singleton metadata file in BMAD workflows and apply its rules to all implementation and agent workflows") and update both occurrences that reference project-context.md (the phrase "throughout all workflows" and the phrase "every implementation workflow") to that chosen wording; also add the recommended locator guidance by inserting the glob suggestion `**/project-context.md` (as text) near the scope line so readers know how to find the file regardless of folder structure.docs/reference/core-tools.md-118-124 (1)
118-124: ⚠️ Potential issue | 🟠 Major: `bmad-distillator` input contract is incomplete (`output_path` is missing)

The documented inputs omit `output_path`, which is part of the skill activation contract. This creates a mismatch between docs and runtime behavior.

Proposed doc fix:
**Input:**
- `source_documents` (required) — File paths, folder paths, or glob patterns
- `downstream_consumer` (optional) — What consumes this (e.g., "PRD creation")
- `token_budget` (optional) — Approximate target size
+- `output_path` (optional) — Where to save the distillate(s)
- `--validate` (flag) — Run round-trip reconstruction test

Based on learnings: use skill-local contracts as canonical behavior source (here aligned with `src/core/skills/bmad-distillator/SKILL.md` activation inputs).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/reference/core-tools.md` around lines 118 - 124, The docs entry for the bmad-distillator input contract is missing the output_path parameter; update docs/reference/core-tools.md so the listed inputs for bmad-distillator match the skill activation contract in src/core/skills/bmad-distillator/SKILL.md by adding `output_path` (marking it required/optional and giving a short description), and verify the other fields (source_documents, downstream_consumer, token_budget, --validate) match names and optionality in SKILL.md.docs/reference/commands.md-106-107 (1)
106-107: ⚠️ Potential issue | 🟠 Major: Section taxonomy conflicts with the linked Core Tools content
Lines 106-107 define this section as “Tasks and tools” that don’t require workflow context, but Line 122 points to a page that explicitly includes workflows (e.g., `bmad-brainstorming`, `bmad-party-mode`). Either rename this section to “Core Skills” or constrain the linked claim to true task/tool-only content.

Also applies to: 122-123
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/reference/commands.md` around lines 106 - 107, The section title "Tasks and tools" conflicts with the linked Core Tools content that includes workflows (e.g., bmad-brainstorming, bmad-party-mode); either rename the section header from "Tasks and tools" to "Core Skills" or update the text and the link target so it only points to task/tool-only content (remove or replace workflow examples and change the description accordingly); locate the header and the link text in docs/reference/commands.md (search for the phrase "Tasks and tools" and the link referencing Core Tools / bmad-*) and make the chosen change so the section label and the linked content are consistent.docs/reference/core-tools.md-164-164 (1)
164-164: ⚠️ Potential issue | 🟠 Major: Line 164 overstates the skill's actual contract
The documented requirement "Must find a minimum of 10 issues or re-analyzes deeper" misrepresents the skill's HALT condition. The actual skill (src/core/skills/bmad-review-adversarial-general/workflow.md, line 31) only mandates: "HALT if zero findings — this is suspicious, re-analyze or ask for guidance."
The "10+" figure at line 171 refers to the expected output format, not an enforced minimum for the analysis to pass. Documentation should clarify that re-analysis is triggered by zero findings, not by finding fewer than 10 issues.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/reference/core-tools.md` at line 164, Update the docs/reference/core-tools.md text that currently reads "Must find a minimum of 10 issues or re-analyzes deeper" to accurately reflect the skill HALT condition by stating that the skill only HALTs and triggers re-analysis when there are zero findings (see src/core/skills/bmad-review-adversarial-general/workflow.md which contains "HALT if zero findings — this is suspicious, re-analyze or ask for guidance"), and clarify that the "10+" mention later refers to expected output format rather than an enforced minimum for passing the analysis.src/core/skills/bmad-distillator/scripts/analyze_sources.py-127-127 (1)
127-127: ⚠️ Potential issue | 🟠 Major: File map keyed by filename may cause collisions across directories.
`file_map = {f.name: f for f in files}` uses only the filename as key. If inputs include files with the same name from different directories (e.g., `subdir1/brief.md` and `subdir2/brief.md`), only one will be retained in the map. This could cause incorrect grouping or missed files.

Proposed fix using full path as key:

- file_map = {f.name: f for f in files}
+ # Map by filename for grouping lookup; if duplicates exist, keep first occurrence
+ file_map: dict[str, Path] = {}
+ for f in files:
+     if f.name not in file_map:
+         file_map[f.name] = f

Or better, redesign grouping to handle same-name files in different directories by using full paths.
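The collision is easy to demonstrate, and keying by the full path (an illustrative alternative, not the script's current behavior) keeps both entries:

```python
from pathlib import Path

# Two same-named files in different directories, as in the review example.
files = [Path("subdir1/brief.md"), Path("subdir2/brief.md")]

by_name = {f.name: f for f in files}   # collides: only one entry survives
by_path = {str(f): f for f in files}   # distinct: both entries retained

print(len(by_name))  # 1
print(len(by_path))  # 2
```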
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/skills/bmad-distillator/scripts/analyze_sources.py` at line 127, The current mapping file_map = {f.name: f for f in files} in analyze_sources.py uses only f.name which allows collisions for same filenames in different directories; change the key to a unique identifier (e.g., f.path, f.full_path, or os.path.join(f.dirname, f.name) depending on the File object) or otherwise adjust the grouping logic that consumes file_map so it uses full paths (reference the file_map construction and any consumers in analyze_sources.py that rely on file_map) to ensure files with identical names from different directories are kept distinct.src/core/skills/bmad-distillator/agents/distillate-compressor.md-71-77 (1)
71-77: ⚠️ Potential issue | 🟠 Major: Conflicting format rules will cause unstable outputs.
Line 75 says “No prose paragraphs,” but Line 94 requires a sentence-style context header for each section. Those directives directly conflict and can produce unpredictable formatting from the compressor.
Also applies to: 94-94
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/skills/bmad-distillator/agents/distillate-compressor.md` around lines 71 - 77, The markdown contains two conflicting rules: "No prose paragraphs" and "sentence-style context header for each section"; fix by removing or changing the sentence-style requirement so headers are terse theme labels only (use only "##" headings as theme titles) and ensure sections contain only dense, self-contained "- " bullets (use semicolons to join short related items within a bullet); update the directive text that currently enforces sentence-style context headers to instead require short/theme-only "##" headings so the compressor produces stable, bullets-only output.src/bmm/workflows/1-analysis/bmad-product-brief-preview/prompts/draft-and-review.md-29-31 (1)
29-31: ⚠️ Potential issue | 🟠 Major: Frontmatter example can produce invalid metadata if copied literally.
Line 29 uses a placeholder list literal (`[list of input files used]`) without schema-safe example structure. This is easy for agents to emit verbatim, which breaks downstream parsers expecting an actual YAML array of file paths.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/1-analysis/bmad-product-brief-preview/prompts/draft-and-review.md` around lines 29 - 31, The frontmatter "inputs:" placeholder currently shows a literal "[list of input files used]" which can be copied verbatim and break YAML parsing; update the frontmatter in draft-and-review.md so the inputs field is a schema-safe example (e.g., an explicit YAML array of example file paths or a commented hint) instead of the free-text placeholder, ensuring the "inputs:" entry is valid YAML and safe for downstream parsers to consume.src/bmm/workflows/1-analysis/bmad-product-brief-preview/agents/web-researcher.md-25-47 (1)
25-47: ⚠️ Potential issue | 🟠 Major: Research output omits citations and publish dates, so claims are unverifiable.
Lines 25-47 require synthesized bullets only. Add per-item provenance (`source_url`, `publisher`, `published_at`) so downstream stages can resolve conflicts and stale claims safely.
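One research bullet carrying per-item provenance might look like the sketch below (all values and field names beyond the three proposed keys are hypothetical):

```python
from datetime import date

# Hypothetical research bullet with the proposed provenance fields attached.
bullet = {
    "claim": "Competitor X shipped a similar brief-generation feature",
    "source_url": "https://example.com/announcement",  # placeholder URL
    "publisher": "Example Tech News",                  # placeholder
    "published_at": "2024-11-02",
}

# A downstream staleness check becomes trivial once dates are present.
published = date.fromisoformat(bullet["published_at"])
is_stale = (date(2025, 6, 1) - published).days > 365
print(is_stale)  # False: published within a year of the check date
```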
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/1-analysis/bmad-product-brief-preview/agents/web-researcher.md` around lines 25 - 47, The JSON output schema (keys competitive_landscape, market_context, user_sentiment, timing_and_opportunity, risks_and_considerations) must include per-item provenance so claims are verifiable: for each array entry add source_url, publisher, and published_at fields (e.g., each competitive_landscape object and each bullet in other sections should become an object that includes the original content plus source_url, publisher, published_at). Preserve the requirement to return ONLY the single JSON object with maximum 5 bullets per section and no extra preamble or commentary; update the template in web-researcher.md to output synthesized bullets (not raw links) and attach those provenance fields to every item.src/bmm/workflows/1-analysis/bmad-product-brief-preview/agents/artifact-analyzer.md-25-25 (1)
25-25:⚠️ Potential issue | 🟠 Major“Read all relevant documents in parallel” is unbounded and can exceed context budgets.
Line 25 has no cap or prioritization strategy. On large repositories, this can trigger context overflow and degrade extraction quality. Add deterministic limits/ranking (e.g., top-N by relevance, then progressive expansion).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/1-analysis/bmad-product-brief-preview/agents/artifact-analyzer.md` at line 25, The step "Read all relevant documents in parallel" is unbounded and must be constrained: replace it with a deterministic read strategy that first ranks candidates (e.g., by TF-IDF/similarity) and reads only a capped top-N (configurable, e.g., top_10), then progressively expands (e.g., top_20) if coverage is insufficient; include an explicit prioritization rule and max-context budget parameter so the analyzer will never issue reads that exceed the context limit.src/bmm/workflows/1-analysis/bmad-product-brief-preview/agents/artifact-analyzer.md-14-20 (1)
14-20: ⚠️ Potential issue | 🟠 Major: Directory scan rules need explicit secret-file exclusions.
Lines 14-20 encourage broad discovery but do not blacklist sensitive artifacts (`.env*`, private keys, token dumps, credential exports). Add an explicit denylist before reading any files.
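A pre-read denylist check could look like the following sketch (the patterns are illustrative assumptions, not the repo's actual rules):

```python
from pathlib import PurePath

# Illustrative denylist of filename patterns to skip before any read.
DENYLIST = [".env*", "*.pem", "*.key", "id_rsa*", "*.p12",
            "*credential*", "*token*", "*secret*"]

def is_sensitive(path: str) -> bool:
    # Match only the filename component against every denylist pattern.
    name = PurePath(path).name
    return any(PurePath(name).match(pat) for pat in DENYLIST)

print(is_sensitive("config/.env.local"))      # True
print(is_sensitive("docs/product-brief.md"))  # False
```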
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/1-analysis/bmad-product-brief-preview/agents/artifact-analyzer.md` around lines 14 - 20, The directory-scan step ("Scan the provided directories" block) lacks an explicit denylist for sensitive files; update the artifact-analyzer scanning logic or the documentation block to add a pre-scan denylist that is checked before reading files (e.g., patterns like .env*, *.pem, id_rsa, *.key, *.crt, *.p12, .aws/*, .git-credentials, *token*, *credential*, *secret*, *api_key*), ensure this denylist is enforced prior to processing or indexing and called out in the doc so the scanner (or any function implementing the scan) will skip and not open any matching paths.src/core/skills/bmad-distillator/agents/distillate-compressor.md-85-107 (1)
85-107: ⚠️ Potential issue | 🟠 Major: Split-mode return contract is underspecified and can break downstream verification.
Lines 89-95 define root + section distillates, but Lines 102-107 only return one `distillate_content` plus optional section blobs. That leaves the caller unable to distinguish whether `distillate_content` is root-only, merged, or section content, which can corrupt completeness checks and file emission decisions.

Suggested contract tightening:
{
-   "distillate_content": "{the complete distillate text without frontmatter}",
+   "distillate_content": "{root distillate text for split mode; full distillate for single mode}",
    "source_headings": ["heading 1", "heading 2"],
    "source_named_entities": ["entity 1", "entity 2"],
    "token_estimate": N,
-   "sections": null or [{"topic": "...", "content": "..."}]
+   "split_mode": false,
+   "sections": null or [
+     {"topic": "...", "part": 1, "part_count": 3, "content": "..."}
+   ]
}
Verify each finding against the current code and only fix it if needed. In `@src/core/skills/bmad-distillator/agents/distillate-compressor.md` around lines 85 - 107, The split-mode return contract is ambiguous: make the response explicit by adding either separate fields "root_distillate" and "section_distillates" (array) or a "distillate_type" enum plus clearly typed fields so callers can unambiguously tell if distillate_content is the root, merged, or a single section; update the schema that includes existing fields ("distillate_content", "sections", "source_headings", "source_named_entities", "token_estimate") and ensure "sections" is used only for section-level blobs while "root_distillate" holds the top-level summary, or alternatively add "distillate_kind" with values like "root", "merged", "sectioned" and enforce that when "distillate_kind" == "sectioned" the "sections" array is non-null and "distillate_content" is clearly documented as either empty or a merged view.src/core/skills/bmad-distillator/agents/round-trip-reconstructor.md-5-5 (1)
5-5: ⚠️ Potential issue | 🟠 Major: Input contract excludes split distillates and will fail in valid workflow paths.
Line 5 assumes a single file, but split mode produces multi-file output. The reconstructor needs to accept either a file path or a distillate folder/index contract, otherwise validation coverage is incomplete.
Also applies to: 11-15
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/skills/bmad-distillator/agents/round-trip-reconstructor.md` at line 5, The reconstructor currently assumes a single "distillate file path" input; update the round-trip-reconstructor logic to accept and validate either a single file path OR a split distillate contract (a distillate folder plus index/manifest) so split-mode outputs are supported; locate the reconstructor entry (references to "distillate file path", "split mode" or "distillate folder/index contract") and modify input parsing/validation to detect a folder/manifest, load the index, and iterate files when present while preserving existing single-file behavior and tests.src/bmm/workflows/1-analysis/bmad-product-brief-preview/prompts/contextual-discovery.md-15-16 (1)
15-16: ⚠️ Potential issue | 🟠 Major: Document scanning path has no secret-exclusion guardrails.
Line 15 includes broad artifact/project folders, and degraded mode still performs ad-hoc reads. Add explicit exclusions for credentials/secrets (`*.env`, keys, tokens, auth dumps) before analysis to avoid accidental leakage into prompts.

Also applies to: 21-24
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/1-analysis/bmad-product-brief-preview/prompts/contextual-discovery.md` around lines 15 - 16, The Artifact Analyzer's scanning logic (agents/artifact-analyzer.md / the component that reads {planning_artifacts} and {project_knowledge}, including degraded-mode ad-hoc readers) lacks secret-exclusion guardrails; update the scanner to pre-filter paths and file contents by rejecting known secret patterns and filenames (e.g., *.env, .env.*, *key*, *token*, *private*, auth dumps, credential files) and to skip binary or credential-like blobs before any analysis or synthesis; ensure the same exclusion/filtering is applied in degraded-mode readers and any functions named like "scanArtifacts", "scanProjectKnowledge", or similar so secrets are never ingested into prompts, and add a configurable denylist and a safe-read fallback that logs excluded items without returning their contents.src/bmm/workflows/1-analysis/bmad-product-brief-preview/agents/skeptic-reviewer.md-28-39 (1)
28-39: ⚠️ Potential issue | 🟠 Major: Most finding types lack section anchors, making triage noisy.
Line 28-Line 39 schemas for gaps/assumptions/risks omit where in the brief the issue occurs. Without location metadata, Stage 4 integration has to guess and may patch the wrong section.
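A minimal sketch of a location-anchored finding and how Stage 4 could group on it (the field names are illustrative, not the skill's current schema):

```python
# Hypothetical finding object carrying a "section" anchor.
finding = {
    "section": "Executive Summary",
    "issue": "Revenue claim has no supporting evidence",
    "severity": "major",
}

def findings_by_section(findings):
    # Group findings so each brief section receives only its own issues.
    grouped = {}
    for f in findings:
        grouped.setdefault(f["section"], []).append(f)
    return grouped

print(list(findings_by_section([finding]).keys()))  # ['Executive Summary']
```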
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/1-analysis/bmad-product-brief-preview/agents/skeptic-reviewer.md` around lines 28 - 39, The schema entries for "critical_gaps", "untested_assumptions", "unacknowledged_risks", and "vague_areas" lack location metadata, so add a consistent "section" (or "location") field to each object in those arrays (e.g., {"section": "Executive Summary", "issue": "...", ...}) and update the example objects to include that key; ensure the change is applied to the schemas for "critical_gaps", "untested_assumptions", "unacknowledged_risks", and "vague_areas" in the skeptic-reviewer definitions so downstream stages can unambiguously map issues to the brief.src/bmm/workflows/1-analysis/bmad-product-brief-preview/SKILL.md-4-4 (1)
4-4: ⚠️ Potential issue | 🟠 Major: Several advertised arguments have no defined handling path.

Line 4 exposes `--create`, `--edit`, `--optimize`, `--distillate`, and `--inputs`, but Stage 1 + mode detection doesn’t define branches for these. That creates a broken user-facing contract.
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/1-analysis/bmad-product-brief-preview/SKILL.md` at line 4, The SKILL.md exposes flags in argument-hint that aren’t handled by Stage 1 mode detection; either remove the unsupported flags from the argument-hint string or implement handling branches for them in the Stage 1 mode detection logic so the contract is consistent—specifically update the argument-hint line (argument-hint: "...") to only list supported modes or add explicit branches for --create, --edit, --optimize, --distillate and --inputs in the Stage 1/mode-detection codepath (the mode detection function/class) so each flag has a defined handling path.src/bmm/workflows/1-analysis/bmad-product-brief-preview/agents/opportunity-reviewer.md-24-24 (1)
24-24:⚠️ Potential issue | 🟠 MajorThe fixed “2-3 per section” quota encourages hallucinated opportunities.
Line 24 requires 2-3 items for every section even when evidence is weak or absent. This can force invented content instead of trustworthy review output.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/1-analysis/bmad-product-brief-preview/agents/opportunity-reviewer.md` at line 24, The template line that currently says "Return ONLY the following JSON object... Focus on the 2-3 most impactful opportunities per section" (the string on line 24 in opportunity-reviewer.md) forces exactly 2-3 items and causes hallucinations; change the wording and any enforcement logic so sections can return 0–3 items based on evidence, and require that if no supported opportunities exist the section returns an empty array or explicit null instead of fabricating items; update the instruction string and any validation that checks for exactly 2-3 items to allow a variable count and to validate each item against source evidence before acceptance.src/bmm/workflows/1-analysis/bmad-product-brief-preview/prompts/finalize.md-21-24 (1)
21-24: ⚠️ Potential issue | 🟠 Major

Headless flow conflicts with interactive distillate prompt.
Lines 21-24 require asking the user before creating a distillate, while Line 50 says headless mode always creates it automatically. This is contradictory and can yield inconsistent stage behavior.
Also applies to: 50-50
src/core/skills/bmad-distillator/SKILL.md-140-141 (1)
140-141: ⚠️ Potential issue | 🟠 Major

`--validate` should return explicit failure when validation is impossible.

When subagents are unavailable, lines 140-141 skip round-trip validation even if `--validate` was requested. This should not silently degrade; return a non-success validation status so callers can enforce quality gates.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/skills/bmad-distillator/SKILL.md` around lines 140-141: the validation flow currently silently skips round-trip validation when subagent spawning is unavailable. Update the logic that handles the `--validate` flag (the round-trip validation branch) so that if subagent spawning is unavailable it returns an explicit non-success validation result (e.g., set validation status to FAILED and propagate a non-zero exit code or throw a validation error) instead of skipping. Ensure the check for "subagent spawning unavailable" is performed early in the `--validate` path and the failure is surfaced to callers so quality gates can detect it.

src/bmm/workflows/1-analysis/bmad-product-brief-preview/prompts/guided-elicitation.md-51-51 (1)
51-51: ⚠️ Potential issue | 🟠 Major

Add explicit redaction boundaries before storing overflow details.
Line 51 instructs silent capture of out-of-scope detail, but it never excludes secrets/PII. That can leak sensitive data into distillates and downstream artifacts.
Proposed fix
```diff
-If the user is giving you detail beyond brief scope (requirements, architecture, platform details, timelines), **capture it silently** for the distillate. Acknowledge it briefly ("Good detail, I'll capture that") but don't derail the conversation.
+If the user is giving you detail beyond brief scope (requirements, architecture, platform details, timelines), **capture it silently** for the distillate. Acknowledge it briefly ("Good detail, I'll capture that") but don't derail the conversation.
+Before persisting overflow notes, redact or omit sensitive values (credentials, secrets, private identifiers, proprietary keys).
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/1-analysis/bmad-product-brief-preview/prompts/guided-elicitation.md` at line 51: update the "silent capture" guidance in guided-elicitation.md so it explicitly enforces redaction before storing overflow details. Modify the sentence that begins "If the user is giving you detail beyond brief scope..." to require wrapping any captured overflow in explicit redaction boundaries (e.g., "[REDACTED START]" and "[REDACTED END]"), automatically scan and redact PII/secrets (emails, phone numbers, SSNs, API keys, credentials, and other configured sensitive patterns) by replacing them with a `<REDACTED>` token, and record a short audit note "Redaction applied" in the distillate metadata. Ensure the instruction mentions the exact phrase to change ("capture it silently") so reviewers can find and update that line.

src/bmm/workflows/1-analysis/bmad-product-brief-preview/agents/opportunity-reviewer.md-26-42 (1)
26-42: ⚠️ Potential issue | 🟠 Major

Output schema lacks source traceability for each finding.
Current objects include opportunity text/rationale but no reference to the brief section or anchor. Draft triage can’t reliably validate or merge findings without provenance.
Proposed schema enhancement
```diff
-{"opportunity": "adjacent problem or value prop", "rationale": "why it matters"}
+{"opportunity": "adjacent problem or value prop", "rationale": "why it matters", "source_section": "brief heading or anchor"}
```
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/1-analysis/bmad-product-brief-preview/agents/opportunity-reviewer.md` around lines 26-42: add provenance fields to every finding so each item in untapped_value, positioning_opportunities, growth_and_scale, strategic_partnerships, and underemphasized_strengths includes where it came from. Change each array element from a plain string/object to include a source object (e.g., source_section, source_anchor, or character offsets) and optionally a confidence score, update any schema/validation code that references these keys to require source, and adjust consumers/parsers to read source_section/source_anchor so triage can trace each finding back to the brief.

src/core/skills/bmad-distillator/SKILL.md-115-125 (1)
115-125: ⚠️ Potential issue | 🟠 Major

Fix the JSON contract notation to be unambiguous.
The documented output (lines 115-125) uses invalid JSON syntax with narrative operators: `["{path1}", "{path2}"] or null` and `"pass" or "pass_with_additions"`. These "or" constructs will confuse agents or downstream JSON parsers expecting valid syntax.

Rewrite the contract using explicit JSON Schema (e.g., `oneOf`, `anyOf`, or `enum` properties) or introduce a clear template notation at the block's start (e.g., "Template values in curly braces; `or` indicates alternatives"). The current notation is ambiguous and error-prone.

Current problematic block:

```json
{
  "status": "complete",
  "distillate": "{path or folder path}",
  "section_distillates": ["{path1}", "{path2}"] or null,
  "source_total_tokens": N,
  "distillate_total_tokens": N,
  "compression_ratio": "X:1",
  "source_documents": ["{path1}", "{path2}"],
  "completeness_check": "pass" or "pass_with_additions"
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/skills/bmad-distillator/SKILL.md` around lines 115 - 125, Replace the ambiguous JSON example in SKILL.md by making the types and alternatives explicit: clarify that template placeholders are in curly braces and change "section_distillates" to be explicitly either an array or null (e.g., "section_distillates": ["{path1}", "{path2}"] or null → document as "section_distillates": either an array of strings or null), and change "completeness_check" to an explicit enum of allowed values (e.g., "completeness_check": one of "pass" or "pass_with_additions"); alternatively replace the example block with a short JSON Schema (or a commented template header) that defines field types for status, distillate, section_distillates, source_total_tokens, distillate_total_tokens, compression_ratio, source_documents, and completeness_check so downstream parsers/agents can unambiguously validate the contract.
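One unambiguous reading of that contract can be sketched as a small validator. This is a hypothetical helper, not part of the skill; the field names come from the example block above, and the enum/optional rules are the interpretation this comment asks the doc to state explicitly.

```python
# Hypothetical validator for the distillator output contract.
# "section_distillates" must be an array of strings OR null (never the
# literal text "or null"), and "completeness_check" is a two-value enum.
ALLOWED_COMPLETENESS = {"pass", "pass_with_additions"}

def validate_contract(result: dict) -> list[str]:
    """Return a list of contract violations; an empty list means valid."""
    errors = []
    sd = result.get("section_distillates")
    if sd is not None and not (isinstance(sd, list) and all(isinstance(p, str) for p in sd)):
        errors.append("section_distillates must be an array of strings or null")
    if result.get("completeness_check") not in ALLOWED_COMPLETENESS:
        errors.append("completeness_check must be 'pass' or 'pass_with_additions'")
    return errors
```

A JSON Schema with `oneOf`/`enum` would express the same rules declaratively; the point either way is that "or" becomes a typed alternative rather than prose inside a value.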
🟡 Minor comments (5)
docs/reference/commands.md-120-120 (1)
120-120: ⚠️ Potential issue | 🟡 Minor

Second subsection has the same structural regression (bold instead of heading).
This repeats the same problem at Line 120: non-heading text makes the section harder to target and cross-reference. Use a proper heading level for this subsection too.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/reference/commands.md` at line 120: the subsection titled "**Other Core Tasks and Tools**" is formatted as bold text instead of a proper markdown heading. Change that bold line to the appropriate heading syntax (e.g., add one or more leading # characters to match the document's heading level) so "Other Core Tasks and Tools" becomes a real section heading consistent with nearby headings and can be targeted and cross-referenced.

docs/reference/core-tools.md-8-8 (1)
8-8: ⚠️ Potential issue | 🟡 Minor

Opening sentence contains a duplicated word that changes meaning.
“in conjunction with any anything you are doing” is a content defect and reads like accidental duplication. Fix this before publishing.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/reference/core-tools.md` at line 8: the opening sentence containing "in conjunction with any anything you are doing" has an accidental duplicated word. Edit that sentence so it reads "in conjunction with anything you are doing" (remove the extra "any") and verify the surrounding sentence still reads smoothly in the paragraph that starts "Every BMad installation includes a set of core skills..." to ensure grammar and meaning are preserved.

src/core/skills/bmad-distillator/scripts/analyze_sources.py-201-211 (1)
201-211: ⚠️ Potential issue | 🟡 Minor

Variable naming misleads: `size`/`total_chars` are bytes, not characters.

`f.stat().st_size` returns file size in bytes, not character count. For UTF-8 files with multibyte characters (non-ASCII), bytes > characters, causing token overestimation. The variable `total_chars` is particularly misleading since it actually holds total bytes.

Proposed fix for clarity (rename variables):

```diff
 # Analyze each file
 file_details = []
-total_chars = 0
+total_bytes = 0
 for f in files:
     size = f.stat().st_size
-    total_chars += size
+    total_bytes += size
     file_details.append({
         "path": str(f),
         "filename": f.name,
         "size_bytes": size,
         "estimated_tokens": size // CHARS_PER_TOKEN,
         "doc_type": detect_doc_type(f.name),
     })
-total_tokens = total_chars // CHARS_PER_TOKEN
+total_tokens = total_bytes // CHARS_PER_TOKEN
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/skills/bmad-distillator/scripts/analyze_sources.py` around lines 201-211: the code uses f.stat().st_size (bytes) but names variables as if they are characters. Rename the local variable size to size_bytes and total_chars to total_bytes, update any references (including the accumulation and the file_details entry) to use size_bytes/total_bytes so the intent is clear, and add a short comment by the estimated_tokens calculation noting this is a bytes-based token estimate using CHARS_PER_TOKEN (or replace the bytes-based estimate with an actual character count/read if you need accuracy); update references to these identifiers where used in analyze_sources.py (e.g., the loop building file_details and the CHARS_PER_TOKEN usage).

src/core/skills/bmad-distillator/scripts/tests/test_analyze_sources.py-192-197 (1)
192-197: ⚠️ Potential issue | 🟡 Minor

Hardcoded `/tmp` path and unprotected cleanup.

Line 193 uses a hardcoded `/tmp/test_analyze_empty.json` path, flagged by static analysis (S108) as potentially insecure. Additionally, `os.unlink(output_file)` on line 197 has no error handling; if the test fails before file creation, cleanup will raise `FileNotFoundError`.

Proposed fix using tempfile and proper cleanup:

```diff
 def test_no_files_found(self):
-    output_file = "/tmp/test_analyze_empty.json"
-    analyze(["/nonexistent/path"], output_file)
-    result = json.loads(Path(output_file).read_text())
-    assert result["status"] == "error"
-    os.unlink(output_file)
+    with tempfile.NamedTemporaryFile(mode='w', suffix='.json', delete=False) as f:
+        output_file = f.name
+    try:
+        analyze(["/nonexistent/path"], output_file)
+        result = json.loads(Path(output_file).read_text())
+        assert result["status"] == "error"
+    finally:
+        if os.path.exists(output_file):
+            os.unlink(output_file)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/skills/bmad-distillator/scripts/tests/test_analyze_sources.py` around lines 192-197: replace the hardcoded "/tmp/test_analyze_empty.json" in test_no_files_found with a secure temporary path (use tempfile.NamedTemporaryFile(delete=False) or the pytest tmp_path/tmp_path_factory fixture) and pass that path to analyze; after the assertion, perform cleanup safely by removing the file with Path.unlink(missing_ok=True) or wrapping os.unlink(output_file) in a try/except FileNotFoundError to avoid raising if the file was never created. Ensure references to the test function name test_no_files_found and the analyze(...) call remain so the change is localized.

src/bmm/workflows/1-analysis/bmad-product-brief-preview/prompts/guided-elicitation.md-64-64 (1)
64-64: ⚠️ Potential issue | 🟡 Minor

Use a deterministic early-draft threshold.
Line 64 says “fewer than 3-4 exchanges,” which is ambiguous and can branch differently across runs. Use one concrete threshold.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/1-analysis/bmad-product-brief-preview/prompts/guided-elicitation.md` at line 64, Replace the ambiguous phrase "fewer than 3-4 exchanges" in the sentence starting "If the user is providing complete, confident answers and you have solid coverage across all four topic areas..." with a single concrete threshold (e.g., "after 3 exchanges") so the prompt deterministically triggers an early draft offer; update any related phrasing to read "after 3 exchanges" (or your chosen single number) and verify any downstream logic or references to this line use the same numeric threshold for consistency.
🧹 Nitpick comments (17)
docs/how-to/project-context.md (1)
8-8: Path guidance is too vague and conflicts with the documented default location

The suggested AGENTS/tools-context sentence uses `[path to project context]/project-context.md`, while this guide later prescribes `_bmad-output/project-context.md` (line 34). Users will copy the placeholder verbatim or point to inconsistent locations. Provide one concrete default string and then explicitly call out when it differs.

Based on learnings: "Treat project-context.md as a singleton metadata file… use pattern `**/project-context.md` across workflows."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/how-to/project-context.md` at line 8: replace the vague placeholder sentence that suggests using `[path to project context]/project-context.md` with a concrete default reference `_bmad-output/project-context.md` and update the guidance to recommend using the glob pattern `**/project-context.md` across workflows; also add a short note saying "If your project stores the file elsewhere, update the path accordingly (e.g., AGENTS.md or tools context)". This change should touch the sentence that mentions adding the line to AGENTS/tools-context and the later line that prescribes `_bmad-output/project-context.md` so both use the same default and explicitly call out when to override it.

docs/reference/core-tools.md (1)
54-55: `bmad-brainstorming` description makes hard outcome guarantees without guardrails

This text promises 100+ ideas and frames it as where "magic happens," but there's no stated fallback if session constraints, user intent, or context limits prevent that. Rephrase as a target heuristic, not an expected outcome guarantee.
Also applies to: 74-76
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/reference/core-tools.md` around lines 54-55: update the "bmad-brainstorming" description to remove the hard guarantee "guides you toward 100+ ideas" and rephrase it as a target or heuristic (e.g., "aims to generate up to 100 ideas" or "targets 100+ ideas where possible") and soften superlatives like "where magic happens" to neutral language (e.g., "to spark creative breakthroughs"); make the same change for the other instance referenced (lines 74-76) so both descriptions present the 100+ outcome as a goal dependent on context and constraints rather than a promise.

src/core/skills/bmad-distillator/scripts/tests/test_analyze_sources.py (2)
120-155: Missing test coverage for same-filename collision across directories.

The `suggest_groups` tests don't cover the scenario where two input files have the same filename but reside in different directories (e.g., `dir1/brief.md` and `dir2/brief.md`). This relates to the `file_map` collision issue identified in `analyze_sources.py`.

Add a test case:

```python
def test_same_filename_different_dirs(self, temp_dir):
    """Verify grouping handles same-named files in different directories."""
    sub1 = Path(temp_dir) / "sub1"
    sub2 = Path(temp_dir) / "sub2"
    sub1.mkdir()
    sub2.mkdir()
    (sub1 / "brief.md").write_text("Brief 1")
    (sub2 / "brief.md").write_text("Brief 2")
    files = [sub1 / "brief.md", sub2 / "brief.md"]
    groups = suggest_groups(files)
    # Both files should appear somewhere in groups
    all_paths = [f["path"] for g in groups for f in g["files"]]
    assert len(all_paths) == 2
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/skills/bmad-distillator/scripts/tests/test_analyze_sources.py` around lines 120 - 155, Add a new unit test to TestSuggestGroups that verifies suggest_groups correctly handles files with identical basenames in different directories (the collision in the file_map in analyze_sources.py); create two subdirectories under the temp_dir, write distinct files named "brief.md" in each, call suggest_groups with both paths, and assert both file paths appear in the returned groups (e.g., collect f["path"] for each group and assert length == 2) so we cover the same-filename/different-dir case.
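The underlying fix in `analyze_sources.py` can be sketched as keying the map by the full resolved path rather than the basename, so the test above passes by construction. `build_file_map` is a hypothetical name for illustration, not the script's actual function:

```python
from pathlib import Path

def build_file_map(files: list[Path]) -> dict[str, Path]:
    """Key entries by the full resolved path so same-named files in
    different directories never collide on their basename."""
    return {str(f.resolve()): f for f in files}
```

With basename keys, `sub1/brief.md` and `sub2/brief.md` would collapse into one entry; with full-path keys both survive.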
11-13: `sys.path` manipulation is fragile for imports.

Inserting the parent directory into `sys.path` at runtime works but is brittle. If the test file moves or the package structure changes, this breaks silently. Consider using a proper package structure with `__init__.py` or pytest's `pythonpath` configuration.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/skills/bmad-distillator/scripts/tests/test_analyze_sources.py` around lines 11-13: the test mutates sys.path via sys.path.insert(0, str(Path(__file__).parent.parent)), which is brittle; remove that mutation and make the code importable via a proper package layout or test configuration. Replace the sys.path hack by ensuring the package root has an __init__.py and importing the target module by its package name (e.g., from mypackage.scripts.test_module import ...), or configure pytest to add the project root to sys.path using pytest.ini (add a pythonpath entry) or use pyproject/pytest's testpath settings; update the test import statements to use the package-qualified imports and delete the sys.path.insert(...) line.

src/core/skills/bmad-distillator/resources/splitting-strategy.md (1)
39-47: Section distillate format doesn't specify frontmatter requirements.

The distillate-format-reference.md shows YAML frontmatter with `type`, `sources`, `downstream_consumer`, etc. for distillates. This splitting strategy describes section distillates having a "Context header (1 line)" but doesn't clarify whether section files need the same YAML frontmatter structure or just the prose header. This ambiguity could lead to inconsistent section file formats.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/skills/bmad-distillator/resources/splitting-strategy.md` around lines 39-47: the "Produce Section Distillates" guidance is ambiguous about frontmatter. Update splitting-strategy.md under the "Produce Section Distillates" section to explicitly require that each section distillate include the same YAML frontmatter fields shown in distillate-format-reference.md (e.g., type, sources, downstream_consumer) and state which fields must be populated versus optional, while clarifying that the "Context header" is an additional one-line prose field inside the file; reference the headings "Context header" and the example frontmatter keys from distillate-format-reference.md so reviewers can locate and enforce the format for section files.

src/core/skills/bmad-distillator/scripts/analyze_sources.py (2)
296-296: `sys.exit(0)` in `main()` is redundant after `analyze()` completes.

The function naturally returns after `analyze()`. Explicit `sys.exit(0)` is unnecessary and could mask issues if the function is refactored to return values. Consider removing or moving exit logic to the `if __name__` block only for error cases.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/skills/bmad-distillator/scripts/analyze_sources.py` at line 296, Remove the redundant sys.exit(0) call from the end of main() — the function will naturally return after analyze() — and if process exit handling is required, perform it only in the top-level guard (if __name__ == "__main__") where you can inspect analyze()'s return or caught exceptions and call sys.exit(nonzero) on errors; update references to main() and analyze() accordingly and ensure no other code depends on main() invoking sys.exit(0).
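The suggested shape can be sketched as follows. This is a minimal illustration, not the script's actual body: `run` is a hypothetical name standing in for `main()`, and the placeholder comment marks where `analyze()` would be called.

```python
import sys

def run(inputs: list[str]) -> int:
    """Return an exit code instead of calling sys.exit() inside the
    function, so callers and tests can inspect the result directly."""
    if not inputs:
        return 2  # usage error
    # analyze(inputs, ...) would run here; failures map to non-zero codes
    return 0

if __name__ == "__main__":
    sys.exit(run(sys.argv[1:]))  # exit handling lives only in the guard
```

Keeping `sys.exit` in the guard means importing the module for tests never terminates the interpreter.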
82-110: No handling of permission errors when accessing files/directories.
`resolve_inputs` doesn't handle `PermissionError` or `OSError` that could occur when accessing protected directories or files. A single inaccessible file would crash the entire analysis.

Proposed fix with error handling:

```diff
 def resolve_inputs(inputs: list[str]) -> list[Path]:
     """Resolve input arguments to a flat list of file paths."""
     files: list[Path] = []
     for inp in inputs:
-        path = Path(inp)
-        if path.is_file():
-            files.append(path.resolve())
-        elif path.is_dir():
-            for root, dirs, filenames in os.walk(path):
-                dirs[:] = [d for d in dirs if d not in SKIP_DIRS]
-                for fn in sorted(filenames):
-                    fp = Path(root) / fn
-                    if fp.suffix.lower() in INCLUDE_EXTENSIONS:
-                        files.append(fp.resolve())
+        try:
+            path = Path(inp)
+            if path.is_file():
+                files.append(path.resolve())
+            elif path.is_dir():
+                for root, dirs, filenames in os.walk(path):
+                    dirs[:] = [d for d in dirs if d not in SKIP_DIRS]
+                    for fn in sorted(filenames):
+                        fp = Path(root) / fn
+                        if fp.suffix.lower() in INCLUDE_EXTENSIONS:
+                            files.append(fp.resolve())
+        except (PermissionError, OSError):
+            continue  # Skip inaccessible paths
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/skills/bmad-distillator/scripts/analyze_sources.py` around lines 82-110: resolve_inputs can raise PermissionError/OSError when touching files or walking directories; wrap filesystem operations (checking path.is_file(), path.is_dir(), the os.walk loop, and per-file checks like mp.is_file()/fp.suffix) in try/except blocks that catch PermissionError and OSError, log a concise warning mentioning the offending path and exception, then continue so a single inaccessible file/dir is skipped rather than crashing the whole function; update resolve_inputs to perform these guarded checks while preserving the existing deduplication using the same SKIP_DIRS and INCLUDE_EXTENSIONS symbols.

src/core/skills/bmad-distillator/resources/compression-rules.md (1)
21-22: Inconsistent decision/rationale compression formats.

Line 21 shows decisions as `"Decision: X. Reason: Y"` format, while line 35 shows `"X (rationale: Y, Z)"` format. These represent the same concept but with different syntax. An AI agent applying these rules may produce inconsistent output. Consider standardizing on one format.

Proposed alignment: either update line 21 to match line 35's parenthetical style:

```diff
-- Decisions made and their rationale (compressed: "Decision: X. Reason: Y")
+- Decisions made and their rationale (compressed: "X (rationale: Y)")
```

Or update line 35 to match line 21's explicit-label style:

```diff
-- "We decided to use X because Y and Z" → "X (rationale: Y, Z)"
+- "We decided to use X because Y and Z" → "Decision: X. Reason: Y, Z"
```

Also applies to: 35-35
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/skills/bmad-distillator/resources/compression-rules.md` around lines 21-22: standardize the decision/rationale compression format in compression-rules.md. Pick one style and apply it consistently across the document by replacing instances of the alternate form: either convert all explicit-label entries like "Decision: X. Reason: Y" to the parenthetical style "X (rationale: Y, Z)" or convert the parenthetical forms to the explicit-label style; update every occurrence (search for the two example patterns "Decision: X. Reason: Y" and "X (rationale: Y, Z)") and ensure the chosen pattern is documented in the rules header so downstream AI agents produce consistent output.

src/bmm/workflows/1-analysis/bmad-product-brief-preview/resources/brief-template.md (1)
60-60: Unexplained reference to "distillate" may confuse template users.

The guidance "that's what the distillate is for" assumes readers understand the distillate concept. Since this is a standalone template resource, consider either removing this reference or adding a brief parenthetical explanation, e.g., "(the compressed machine-readable companion document)".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/1-analysis/bmad-product-brief-preview/resources/brief-template.md` at line 60: the sentence containing "that's what the distillate is for" is unclear to standalone template users; either remove this clause or replace it with a brief parenthetical explanation like "(i.e., the compressed machine-readable companion document)" so readers understand what "distillate" refers to; locate the exact sentence string "The brief should be 1-2 pages. If it's longer, you're putting in too much detail — that's what the distillate is for." in brief-template.md and update it accordingly.

tools/cli/external-official-modules.yaml (2)
49-50: Duplicate name and description text.

The `name` and `description` fields contain identical text: "Whiteport Design Studio (For UX Professionals)". The description should provide additional context beyond the name.

Proposed fix:

```diff
   name: "Whiteport Design Studio (For UX Professionals)"
-  description: "Whiteport Design Studio (For UX Professionals)"
+  description: "UX design agents and workflows for design professionals"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/cli/external-official-modules.yaml` around lines 49 - 50, The name and description fields currently duplicate each other ("Whiteport Design Studio (For UX Professionals)"); update the description field to provide additional context or details about the module (for example target users, key features, or purpose) rather than repeating the name—locate the YAML keys name and description in the same mapping and replace the description value with a concise, informative summary distinct from the name.
25-26: Inconsistent URL format: `.git` suffix only on this entry.

Line 26 includes a `.git` suffix in the URL (`bmad-module-game-dev-studio.git`) while all other module URLs omit it. This inconsistency could cause issues if URL parsing or caching logic is sensitive to the suffix.

Proposed fix:

```diff
 bmad-game-dev-studio:
-  url: https://github.com/bmad-code-org/bmad-module-game-dev-studio.git
+  url: https://github.com/bmad-code-org/bmad-module-game-dev-studio
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/cli/external-official-modules.yaml` around lines 25-26: the URL for the entry keyed by bmad-game-dev-studio includes a trailing `.git` which is inconsistent with other module URLs; update the URL value for bmad-game-dev-studio (the string "https://github.com/bmad-code-org/bmad-module-game-dev-studio.git") to match the other entries by removing the `.git` suffix so it becomes "https://github.com/bmad-code-org/bmad-module-game-dev-studio".

src/core/skills/bmad-distillator/resources/distillate-format-reference.md (1)
113-227: Full example contains specific market data that will become outdated.

The example distillate includes concrete numbers (market size projections, competitor stats like "83K skills, 20% trigger reliability") that will age. While acceptable for illustrative purposes, consider adding a comment noting these are fictional/illustrative values, or use obviously placeholder data to avoid the example being mistaken for current intelligence.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/skills/bmad-distillator/resources/distillate-format-reference.md` around lines 113-227: the "Full Example" distillate contains hardcoded market and competitor numbers that will age. Update the example in the section starting with "## Full Example" (specifically entries under "Competitive Landscape" and market-size lines like "$7.84B (2025) → $52.62B (2030)", "83K skills", "351K+ skills", and percentage stats) to either annotate them as illustrative (add a clear NOTE: "illustrative/fictional data") or replace them with generic placeholders (e.g., "<market-size-placeholder>", "<skills-count-placeholder>") so readers don't mistake them for current intelligence.

src/core/skills/bmad-distillator/agents/distillate-compressor.md (1)
110-114: `source_headings` is unbounded and fights the token-efficiency objective.

Line 111 asks for all Level 2+ headings across sources. On large docs this can explode output size and dilute useful signal. Cap it or return a compact structure (e.g., deduped + truncated list).
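The capped structure could look like this. It is a hypothetical helper for illustration; `compact_headings` and the default cap of 20 are assumptions, not part of the agent spec:

```python
def compact_headings(headings: list[str], cap: int = 20) -> dict:
    """Dedupe in first-seen order, truncate to `cap`, and flag truncation
    so consumers know the full count without paying the token cost."""
    seen = list(dict.fromkeys(headings))  # order-preserving dedupe
    return {
        "count": len(seen),
        "headings": seen[:cap],
        "truncated": len(seen) > cap,
    }
```

Returning `count` plus a `truncated` flag preserves the signal ("this source had 140 headings") without emitting all of them.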
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/skills/bmad-distillator/agents/distillate-compressor.md` around lines 110-114: the current schema field source_headings requests "all Level 2+ headings", which can produce unbounded, token-heavy output. Modify the spec for source_headings to limit size by returning a deduplicated, truncated list (e.g., top N headings or first N after dedupe) and/or include a token-safe summary form (count + sample list + continuation token flag) instead of the full set; update the description near the distillate_content/source_headings/token_estimate block to describe the cap, dedupe behavior, and the maximum N or concise structure to use.

src/bmm/workflows/1-analysis/bmad-product-brief-preview/agents/artifact-analyzer.md (1)
56-58: Raw preserved details need provenance fields for safe downstream reuse.

Lines 56-58 emit free-form bullets without source path/section references. Distillation and review stages cannot verify or trace those details back to artifacts.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/1-analysis/bmad-product-brief-preview/agents/artifact-analyzer.md` around lines 56-58: the "raw_detail_worth_preserving" output currently emits free-form bullet strings. Change it to emit structured provenance objects (e.g., replace the array-of-strings entry under "raw_detail_worth_preserving" with objects containing at least fields like "text" (the detail), "source_artifact" or "source_id", and "source_path" or "source_section") so downstream distillation/review can trace each detail to its artifact; update any producers/consumers referenced around "raw_detail_worth_preserving" to populate and expect this new object shape.

src/core/skills/bmad-distillator/agents/round-trip-reconstructor.md (1)
40-43: Deterministic temp filenames can overwrite previous validation artifacts.

Lines 41-43 always reuse `-reconstruction-{N}.md`. Re-running validation on the same distillate will silently clobber prior outputs and erase traceability.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/skills/bmad-distillator/agents/round-trip-reconstructor.md` around lines 40-43: the deterministic filename pattern that saves reconstructions as "{distillate-basename}-reconstruction-{N}.md" can clobber prior artifacts. Change the saving logic that generates these filenames to produce unique, non-colliding names (e.g., append a timestamp or UUID, or check existence and increment a counter before writing) so each reconstructed document adjacent to the distillate keeps its own trace (refer to the "{distillate-basename}-reconstruction-{N}.md" naming logic and the code path that writes the temporary reconstruction files).

src/bmm/workflows/1-analysis/bmad-product-brief-preview/prompts/draft-and-review.md (1)
70-83: Yolo mode behavior is inconsistent with one-pass intent.

Line 70 groups Yolo with Guided and line 82 introduces iterative refinement loops. That collapses mode differentiation and can add unintended interaction in Yolo runs.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/1-analysis/bmad-product-brief-preview/prompts/draft-and-review.md` around lines 70 - 83, The "Yolo and Guided modes" section currently conflates Yolo with Guided by describing iterative reviewer loops; separate behaviors so Yolo enforces one-pass intent: in the "Yolo and Guided modes" text (the "Yolo and Guided modes:" heading and the quoted reviewer text) update Yolo to present the draft plus a concise set of only the most critical reviewer findings and a single-choice prompt (e.g., "Accept, minor tweak, or restart?") with no iterative "Iterate" or "anything else, or are we happy with this?" soft-gate; keep the iterative refinement flow only for Guided mode and ensure the quoted prompts and the "Iterate" paragraph explicitly apply to Guided, not Yolo.

src/bmm/workflows/1-analysis/bmad-product-brief-preview/prompts/contextual-discovery.md (1)
55-57: Stage output is ephemeral; no durable handoff artifact is defined.

Line 55-Line 57 route to next prompts but never persist a compact “context pack.” In longer runs this increases drift and repeated work. Persisting a small structured summary would stabilize Stage 3/4 behavior.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/1-analysis/bmad-product-brief-preview/prompts/contextual-discovery.md` around lines 55 - 57, The stage currently routes to prompts/guided-elicitation.md or prompts/draft-and-review.md without persisting a compact "context pack", causing drift; modify the contextual-discovery stage (contextual-discovery.md) to build and persist a small structured summary (e.g., key findings, user context, timestamps, confidence) into the workflow run state under a stable key like "context_pack", and update the downstream prompt entry points (prompts/guided-elicitation.md and prompts/draft-and-review.md) to accept and load that "context_pack" from run state as the canonical context input so subsequent stages use the persisted summary instead of re-scanning or relying only on ephemeral inline data.
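The persistence the finding asks for could be sketched as follows (a minimal sketch: the `context_pack.json` filename, the directory-based run state, and the pack fields are all assumptions, not the workflow's actual API):

```python
# Minimal sketch, assuming run state lives in a directory and the pack is JSON;
# neither the filename nor the key layout comes from the repo.
import json
from pathlib import Path


def save_context_pack(run_state_dir: Path, pack: dict) -> Path:
    """Persist a compact context summary so later stages can reload it."""
    run_state_dir.mkdir(parents=True, exist_ok=True)
    path = run_state_dir / "context_pack.json"
    path.write_text(json.dumps(pack, indent=2), encoding="utf-8")
    return path


def load_context_pack(run_state_dir: Path) -> dict:
    """Reload the persisted summary instead of re-scanning artifacts."""
    path = run_state_dir / "context_pack.json"
    return json.loads(path.read_text(encoding="utf-8"))
```

Downstream prompts would then treat the loaded pack as the canonical context input rather than re-deriving it each run.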
```json
"description": "Produces executive product brief and optional LLM distillate for PRD input.",
"supports-headless": true,
"phase-name": "1-analysis",
"after": ["brainstorming, perform-research"],
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Verify how other bmm manifests encode multi-item "after" dependencies.
rg -nP '"after"\s*:\s*\[' --type=json -C2
rg -nP '"after"\s*:\s*\[\s*"[^"]+,\s*[^"]+"\s*\]' --type=json -C2
```

Repository: bmad-code-org/BMAD-METHOD

Length of output: 1202

🏁 Script executed:

```shell
# Find all bmm manifest files and examine their "after" definitions
fd -e 'bmad-manifest.json' -x cat {} + | grep -A1 -B1 '"after"'
```

Length of output: 51

🏁 Script executed:

```shell
# Search for the correct pattern (array with multiple string items, no commas)
rg -U '"after"\s*:\s*\[.*\]' --type=json src/bmm/ -A1 -B1
```

Length of output: 411

🏁 Script executed:

```shell
# Look for manifest parsing/validation code to understand expected format
fd -type f \( -name '*.js' -o -name '*.ts' \) | xargs grep -l 'after.*depend' | head -5
```

Length of output: 239

🏁 Script executed:

```shell
# Find all bmad-manifest.json files
find src/bmm -name 'bmad-manifest.json' -type f
```

Length of output: 143

🏁 Script executed:

```shell
# Extract all "after" field values from manifest files
find src/bmm -name 'bmad-manifest.json' -type f -exec grep -h '"after"' {} \;
```

Length of output: 120

🏁 Script executed:

```shell
# Search for code that parses manifest files or validates dependencies
rg -i 'manifest|after.*depend' --type=js --type=ts src/ | grep -E '(parse|depend|after)' | head -20
```

Length of output: 51

🏁 Script executed:

```shell
# Search for documentation about manifest format
find . -name 'README*' -o -name '*.md' | xargs grep -l 'manifest\|after.*depend' 2>/dev/null | head -10
```

Length of output: 586

🏁 Script executed:

```shell
# Find code that loads or imports the manifest file
rg 'bmad-manifest|manifest\.json' --type=js --type=ts --type=json
```

Length of output: 51

🏁 Script executed:

```shell
# Look for workflow engine or scheduler code
rg -i 'workflow|scheduler|after.*before.*depend' --type=js --type=ts src/ -l | head -10
```

Length of output: 51

🏁 Script executed:

```shell
# Read the actual manifest file
cat src/bmm/workflows/1-analysis/bmad-product-brief-preview/bmad-manifest.json
```

Length of output: 562

🏁 Script executed:

```shell
# Search for "brainstorming" and "perform-research" references
rg 'brainstorming|perform-research' -t json -t js -t ts -t md
```

Length of output: 18241

🏁 Script executed:

```shell
# Look for documentation about manifest format in docs
rg -i 'manifest.*format\|after.*before' docs/ | head -20
```

Length of output: 51
`after` dependency list uses a comma-separated string instead of separate array items.

Line 11 contains a single array element with comma-separated values: `["brainstorming, perform-research"]`. The correct format per the manifest documentation is two separate items: `["brainstorming", "perform-research"]`.
Dependency resolution will fail because the scheduler cannot match the literal string "brainstorming, perform-research" against valid capability names.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/bmm/workflows/1-analysis/bmad-product-brief-preview/bmad-manifest.json`
at line 11, The manifest's "after" dependency is a single string containing
comma-separated values ("brainstorming, perform-research") which prevents the
scheduler from resolving dependencies; fix the "after" array in
bmad-manifest.json by replacing the single element with two separate items:
"brainstorming" and "perform-research" (i.e., change the "after" value from
["brainstorming, perform-research"] to ["brainstorming", "perform-research"]).
The party-mode workflow moved from core/workflows/ to core/skills/ in PR #1959, breaking validate:refs. Convert all raw path references to skill:bmad-party-mode to match the established convention. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Party-mode moved from core/workflows/ to core/skills/ in PR #1959, breaking 11 file references. Convert all to skill:bmad-party-mode matching the convention used by skill:bmad-advanced-elicitation. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
What
Prototype of some new ideas for an early preview of the newer, skill-optimized create-product-brief skill, built with newer patterns.