chore(core): convert help.md to native skill directory #1874
alexeyv merged 3 commits into bmad-code-org:main
@coderabbitai review
🤖 Augment PR Summary
Summary: This PR migrates the legacy single-file help task to a native skill directory.
Technical Notes: Aligns the help task with the same discovery/installation mechanism used by other skill directories (manifest + …).
📝 Walkthrough
Replaces help document file path references with a `skill:` reference.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
src/bmm/workflows/3-solutioning/check-implementation-readiness/steps/step-06-final-assessment.md (5)
75-79: ⚠️ Potential issue | 🟠 Major — Missing criteria for readiness status and critical issue classification.
The template provides three readiness statuses (READY/NEEDS WORK/NOT READY) and a "Critical Issues" section without any guidance on:
- What constitutes a "critical" issue versus a standard issue
- What threshold or conditions determine each readiness status
- How to classify severity consistently
While hard numeric thresholds should be avoided per project guidelines, the complete absence of qualitative guidance leaves the determination entirely subjective and non-repeatable.
Suggested guidance
Add before line 75:
Determine readiness status based on:
- READY: No critical issues; minor issues are documented but don't block implementation
- NEEDS WORK: One or more critical issues found that should be addressed before starting implementation
- NOT READY: Multiple critical issues or fundamental gaps that require significant rework

Consider an issue "critical" if it would likely cause implementation failure, major rework, or significant user impact if not addressed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/3-solutioning/check-implementation-readiness/steps/step-06-final-assessment.md` around lines 75 - 79, The readiness template (section showing "READY/NEEDS WORK/NOT READY" and the "Critical Issues Requiring Immediate Action" block) lacks qualitative guidance for classifying readiness and what counts as a "critical" issue; update step-06-final-assessment.md to add a short guidance paragraph before the status line that defines the three statuses (READY, NEEDS WORK, NOT READY) with qualitative criteria and a clear rule for what constitutes a "critical" issue (e.g., causes implementation failure, major rework, or significant user impact), and add a one-line rubric for consistently classifying severity to the "Critical Issues Requiring Immediate Action" section so reviewers can apply it repeatably.
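The qualitative mapping suggested above can be sketched as a small function. This is an illustrative sketch only — the function name and inputs are hypothetical, and it deliberately mirrors the suggested guidance (no critical issues → READY, one or more → NEEDS WORK, multiple or fundamental gaps → NOT READY) rather than inventing numeric thresholds:

```javascript
// Hypothetical sketch of the readiness rubric suggested in the review.
// criticalIssues: array of issue descriptions classified as critical;
// hasFundamentalGaps: whether the assessment found gaps requiring major rework.
function readinessStatus(criticalIssues, hasFundamentalGaps) {
  if (hasFundamentalGaps || criticalIssues.length > 1) return "NOT READY";
  if (criticalIssues.length >= 1) return "NEEDS WORK";
  return "READY";
}
```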
60-62: ⚠️ Potential issue | 🟠 Major — Missing validation that previous steps completed successfully.
The instruction to "Check the {outputFile} for sections added by previous steps" assumes:
- The outputFile exists
- Previous steps completed successfully
- Required sections were written
If any previous step failed or was skipped, this step will fail without a clear error message. According to the micro-file design learning, step files should be self-contained and robust.
Suggested safeguard
Add validation before line 60:
First, verify {outputFile} exists and contains sections from previous steps. If the file is missing or incomplete, HALT and inform the user that previous assessment steps must complete before final assessment can proceed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/3-solutioning/check-implementation-readiness/steps/step-06-final-assessment.md` around lines 60 - 62, The step-06-final-assessment.md content assumes {outputFile} exists and prior steps succeeded; add a pre-check at the start of this step to validate that {outputFile} exists and contains the required sections produced by earlier steps (e.g., "File and FR Validation findings"); if the file is missing or the expected sections are absent, HALT the step and emit a clear error instructing the user to run or rerun the previous assessment steps before proceeding. Ensure this validation is self-contained in step-06-final-assessment.md and references "{outputFile}" and the expected section headers so the step fails fast with a helpful message rather than proceeding and producing unclear errors.
89-89: ⚠️ Potential issue | 🟠 Major — Ambiguous placeholder replacement instructions.
The template contains placeholders `[X]` and `[Y]` but provides no explicit instruction to the agent to count issues and replace these values. While the agent may infer this requirement, the absence of explicit direction risks:
- Literal placeholder text appearing in the final report
- Inconsistent counting methodology
- No validation that the counts are accurate
Suggested improvement
Add explicit instruction before line 89:
Count total issues from all previous sections and replace [X] with the total count and [Y] with the number of categories (typically 3: File/FR Validation, UX Alignment, Epic Quality).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/3-solutioning/check-implementation-readiness/steps/step-06-final-assessment.md` at line 89, The template uses ambiguous placeholders [X] and [Y]; update the document (near the sentence containing "This assessment identified [X] issues across [Y] categories.") to add an explicit instruction that the agent should count all issues from previous sections and replace [X] with the total issue count and [Y] with the number of categories (e.g., 3: File/FR Validation, UX Alignment, Epic Quality), and also require a validation step to ensure the numeric replacements match the counted items before finalizing the report.
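The counting and substitution being asked for can be sketched as follows. The category names and data shape are assumptions taken from the review text, not the workflow's real structures:

```javascript
// Illustrative sketch: tally issues per findings category, then substitute
// the [X] and [Y] placeholders in the summary template with computed values.
function fillSummary(template, findings) {
  const categories = Object.keys(findings).filter((name) => findings[name].length > 0);
  const total = categories.reduce((sum, name) => sum + findings[name].length, 0);
  return template
    .replace("[X]", String(total))
    .replace("[Y]", String(categories.length));
}
```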
104-104: ⚠️ Potential issue | 🟡 Minor — Display message shows variable pattern instead of resolved path.
The display message "Report generated: {outputFile}" will show the variable pattern rather than the actual resolved file path. Users need the concrete path to locate and open the report.
Suggested improvement
- Report generated: {outputFile}
+ Report generated at: [actual resolved path]

Or add instruction to agent:
Display the fully resolved path (e.g., `_bmad/planning_artifacts/implementation-readiness-report-2026-03-09.md`) rather than the {outputFile} variable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/3-solutioning/check-implementation-readiness/steps/step-06-final-assessment.md` at line 104, The message currently prints the literal pattern "Report generated: {outputFile}" instead of the resolved path; update the template to interpolate or insert the actual outputFile value (the variable named outputFile) so the user sees the concrete file path (for example "_bmad/planning_artifacts/implementation-readiness-report-2026-03-09.md") rather than the brace-delimited placeholder; locate the string "Report generated: {outputFile}" in step-06-final-assessment.md and change it to render outputFile (or call the function/formatter that resolves outputFile) using the project's templating/formatting convention so the fully resolved path is displayed to the user.
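A minimal sketch of the placeholder resolution the review describes, assuming a simple `{name}` templating convention (the project's actual convention may differ):

```javascript
// Illustrative resolver: substitute {variable} placeholders with known values
// so the user sees the concrete path, leaving unknown placeholders untouched.
function resolveTemplate(message, vars) {
  return message.replace(/\{(\w+)\}/g, (match, name) =>
    name in vars ? vars[name] : match
  );
}
```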
22-22: ⚠️ Potential issue | 🟡 Minor — Grammar and capitalization issue in communication rule.
The phrase "SPEAK OUTPUT In your Agent communication style" has inconsistent capitalization (capital "In" mid-sentence) and awkward phrasing. This degrades the quality of the agent prompt.
Suggested improvement
- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}`
+ ✅ YOU MUST ALWAYS PROVIDE OUTPUT in your Agent communication style using the config `{communication_language}`

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/3-solutioning/check-implementation-readiness/steps/step-06-final-assessment.md` at line 22, Update the problematic prompt sentence "✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}`" to correct capitalization and improve phrasing: change "In" to lowercase and rephrase for clarity, e.g. "✅ You must always produce output in your agent communication style using the config `{communication_language}`" (or similar concise wording), ensuring consistent sentence case and clear intent in step-06-final-assessment.md.
♻️ Duplicate comments (1)
src/bmm/workflows/3-solutioning/check-implementation-readiness/steps/step-06-final-assessment.md (1)
106-106: ⚠️ Potential issue | 🟡 Minor — Duplicate placeholder ambiguity in display message.
Similar to the issue at line 89, the display message uses `[number]` without explicit instruction for replacement. The agent must infer it should count issues across all findings sections.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/3-solutioning/check-implementation-readiness/steps/step-06-final-assessment.md` at line 106, The display message contains an ambiguous placeholder "[number]" that must be replaced with a computed total of issues across all findings sections; update the template in step-06-final-assessment.md to explicitly compute and inject the sum (e.g., totalIssues) instead of leaving "[number]" raw, ensuring the code or rendering logic that builds this message aggregates counts from all findings sections and substitutes the computed total into the message text.
🧹 Nitpick comments (3)
src/bmm/workflows/3-solutioning/check-implementation-readiness/steps/step-06-final-assessment.md (3)
87-89: Template section naming inconsistency.
The template sections use descriptive, action-oriented headers like "Overall Readiness Status" and "Critical Issues Requiring Immediate Action", but then switch to the generic "Final Note" at line 87.
For consistency, consider a more descriptive header such as "Assessment Summary" or "Implementation Guidance" that matches the style and specificity of the other section headers.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/3-solutioning/check-implementation-readiness/steps/step-06-final-assessment.md` around lines 87 - 89, Rename the generic "Final Note" header to a descriptive, action-oriented title (e.g., "Assessment Summary" or "Implementation Guidance") to match the style of other sections; update the header text in the step-06-final-assessment.md file where the "Final Note" heading appears and adjust any in-file cross-references or anchors that rely on that heading so links and table of contents remain correct.
110-112: Style inconsistency in workflow completion section.
Line 110 uses a formal completion statement ("The implementation readiness workflow is now complete.") while line 112 uses a terse declarative plus command ("Implementation Readiness complete. Run `/bmad-help`").
The stylistic shift is jarring. Consider aligning both to the same formal or informal tone.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/3-solutioning/check-implementation-readiness/steps/step-06-final-assessment.md` around lines 110 - 112, The two closing sentences are stylistically inconsistent: update the terse line ("Implementation Readiness complete. Run `/bmad-help`") to match the formal tone of the preceding sentence by rewriting it as a complete, formal sentence (for example: "The Implementation Readiness workflow is complete. To get help, run `/bmad-help`.") so both lines use the same register; locate and edit the second sentence in step-06-final-assessment.md to apply this change.
38-44: No error handling for execution protocol failures.
The execution protocols section lists actions but provides no error handling for common failure scenarios:
- File I/O errors when appending to {outputFile}
- Empty or invalid findings from previous steps
- Inability to determine readiness status
Adding HALT conditions or graceful degradation paths would make the step more robust. Consider adding error handling guidance such as "If {outputFile} cannot be written, HALT and inform user of the file permission issue."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/3-solutioning/check-implementation-readiness/steps/step-06-final-assessment.md` around lines 38 - 44, The EXECUTION PROTOCOLS section lacks error handling for failures when appending to {outputFile}, processing findings, or determining readiness; update the step-06-final-assessment content to explicitly validate inputs (ensure findings from previous steps are non-empty and well-formed), wrap the {outputFile} write/append operation with a clear failure path that logs the permission/IO error and HALTs (or falls back to a safe alternative), and add a defined path for “unable to determine readiness” that records indeterminate status, recommends next actions, and notifies the user; reference the EXECUTION PROTOCOLS header and the {outputFile} placeholder so reviewers can locate and implement these checks and messages in the step content.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/bmm/workflows/2-plan-workflows/create-prd/steps-c/step-12-complete.md`:
- Line 90: The final step file step-12-complete.md currently ends with "PRD
complete. Run `/bmad-help`" which removes the required guidance and conflicts
with the "do not load additional steps" rule; replace that single-line prompt
with a concise list of suggested next workflows (e.g., "Next workflows:
design-review, implementation-plan, stakeholder-review") plus one short
instruction to consult the /bmad-help command only if they need general
assistance—keep the phrasing in this file self-contained and actionable (do not
instruct the agent to load or run additional workflow steps), and ensure the
file explicitly presents the next workflow options rather than only pointing to
/bmad-help.
In
`@src/bmm/workflows/3-solutioning/create-architecture/steps/step-08-complete.md`:
- Line 44: Replace the immediate imperative "Architecture complete. Run
`/bmad-help`" in the "Next Steps Guidance" section with a user-facing suggestion
that preserves the architecture-specific recommendation rather than invoking the
skill; for example, change the text to a neutral prompt like "Architecture
complete. For more help with next steps, you can run `/bmad-help` or follow the
architecture-specific recommendations above." Ensure you update the step content
in the step-08-complete.md prompt so it offers the `/bmad-help` option rather
than executing it.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: d63fc9b8-65f5-411f-a640-8226a3fdb389
⛔ Files ignored due to path filters (1)
src/core/module-help.csv is excluded by !**/*.csv
📒 Files selected for processing (11)
- src/bmm/workflows/1-analysis/create-product-brief/steps/step-06-complete.md
- src/bmm/workflows/2-plan-workflows/create-prd/steps-c/step-12-complete.md
- src/bmm/workflows/2-plan-workflows/create-prd/steps-v/step-v-13-report-complete.md
- src/bmm/workflows/2-plan-workflows/create-ux-design/steps/step-14-complete.md
- src/bmm/workflows/3-solutioning/check-implementation-readiness/steps/step-06-final-assessment.md
- src/bmm/workflows/3-solutioning/create-architecture/steps/step-08-complete.md
- src/bmm/workflows/3-solutioning/create-epics-and-stories/steps/step-04-final-validation.md
- src/core/tasks/bmad-help/SKILL.md
- src/core/tasks/bmad-help/bmad-skill-manifest.yaml
- src/core/tasks/bmad-help/workflow.md
- src/core/tasks/bmad-skill-manifest.yaml
💤 Files with no reviewable changes (2)
- src/core/tasks/bmad-skill-manifest.yaml
- src/core/tasks/bmad-help/workflow.md
Migrate the single-file help.md task to a bmad-help/ skill directory following the pattern established by bmad-review-adversarial-general. Update module-help.csv to use skill: reference and remove the entry from the parent manifest. Fix 7 BMM workflow step files that had hardcoded file path references to the now-relocated help task. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
b0d30b2 to
6d43185
Compare
…#1874)

* chore(core): convert help.md to native skill directory

Migrate the single-file help.md task to a bmad-help/ skill directory following the pattern established by bmad-review-adversarial-general. Update module-help.csv to use skill: reference and remove the entry from the parent manifest. Fix 7 BMM workflow step files that had hardcoded file path references to the now-relocated help task.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor(prompts): invoke bmad-help as a skill

* style(prompts): format bmad-master agent yaml

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Summary

- Convert `help.md` task to `bmad-help/` skill directory (SKILL.md + workflow.md + bmad-skill-manifest.yaml)
- Update `module-help.csv` workflow-file reference to `skill:bmad-help`
- Remove `help.md` entry from parent `bmad-skill-manifest.yaml`
- Replace `help.md` path references with the `/bmad-help` command

Test plan
- Run `bmad-cli.js install` and verify the `bmad-help` skill appears in `.claude/skills/`
- Run `/bmad-help` and verify identical behavior to the old `help.md` task
- Verify no stale references remain (`grep -r "tasks/help.md" src/`)

🤖 Generated with Claude Code