refactor: convert workflow.yaml to workflow.md (steps 1-3)#1842
alexeyv merged 8 commits into bmad-code-org:main
Conversation
@coderabbitai review
🤖 Augment PR Summary

Summary: This PR begins converting BMAD workflows from the legacy two-file format (a YAML config + separate instructions) into a single, self-contained workflow.md file.
Technical Notes: The new markdown workflows use YAML frontmatter.
📝 Walkthrough

The PR migrates multiple workflows from YAML-based configuration to Markdown format. It updates agent configurations to reference Markdown workflows, converts existing YAML workflow definitions to Markdown with expanded initialization and configuration loading sections, and includes comprehensive test documentation comparing old and new formats across different LLM models.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
src/bmm/workflows/4-implementation/retrospective/workflow.md (1)
1449-1451: ⚠️ Potential issue | 🟡 Minor: Non-standard Handlebars comparison syntax.

Line 1449 uses `{{#if significant_discovery_count > 0}}`, which includes a comparison operator. Standard Handlebars `{{#if}}` only checks truthiness, not comparisons. This will either fail to parse or always evaluate the condition incorrectly.
Proposed fix
Use a helper or pre-computed boolean:
- {{#if significant_discovery_count > 0}}
+ {{#if has_significant_discoveries}}

Or rely on truthiness (non-zero is truthy in many template engines):

- {{#if significant_discovery_count > 0}}
+ {{#if significant_discovery_count}}
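The pre-computed-boolean route can be sketched in Python; `build_retro_context` and any context keys besides `significant_discovery_count` are hypothetical illustration, not existing workflow code:

```python
def build_retro_context(significant_discovery_count: int, **extra) -> dict:
    """Build the template context, precomputing booleans so the template
    only needs a plain truthiness check like {{#if has_significant_discoveries}}."""
    ctx = {"significant_discovery_count": significant_discovery_count, **extra}
    # The comparison happens here, not inside the Handlebars template.
    ctx["has_significant_discoveries"] = significant_discovery_count > 0
    return ctx
```

This keeps the template engine-agnostic: any engine that supports truthiness checks can render it without custom helpers.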
-{{`#if` significant_discovery_count > 0}} +{{`#if` significant_discovery_count}}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/4-implementation/retrospective/workflow.md` around lines 1449 - 1451: the template uses a non-standard comparison in the Handlebars block `{{#if significant_discovery_count > 0}}`; replace this with a valid construct by either precomputing a boolean (e.g., set a context property like `has_significant_discoveries` where has_significant_discoveries = significant_discovery_count > 0 and then use `{{#if has_significant_discoveries}}`) or register/use a Handlebars helper (e.g., `gt` or `compare`) and call it properly (e.g., `{{#if (gt significant_discovery_count 0)}}`); keep the surrounding usage of "next_epic_num" unchanged.

src/bmm/workflows/4-implementation/sprint-status/workflow.md (1)
119-127: ⚠️ Potential issue | 🟡 Minor: Malformed XML structure - step closing tag placement.

Line 127's `</step>` appears to close step n="2", but the indentation and content suggest it should be placed after the risk detection actions. The current structure may cause parsing issues if the workflow engine validates XML. The `<action>Detect risks:</action>` block at lines 119-126 appears to be inside step 2, but the closing tag position is ambiguous.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/4-implementation/sprint-status/workflow.md` around lines 119 - 127, The XML step closing tag is misplaced relative to the risk-detection actions: move the </step> so that it closes after the entire <action>Detect risks:</action> block (i.e., ensure all IF/... bullet lines for risk detection are nested inside the same step), so step n="2" properly contains the detect-risks content; then validate the XML structure (matching opening <step n="2"> and its closing </step>) and adjust any indentation/nesting around the <action>Detect risks:</action> block to prevent parsing errors.

src/bmm/workflows/qa-generate-e2e-tests/workflow.md (1)
82-86: ⚠️ Potential issue | 🟡 Minor: Incomplete error handling for test failures.
Step 4 instructs "If failures occur, fix them immediately" but provides no guidance on:
- What if the fix introduces new failures?
- Maximum retry attempts before escalating to user
- How to handle unfixable failures (e.g., environment issues)
This could lead to infinite loops or abandoned workflows.
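One possible shape for bounded fix-and-retry, sketched in Python; `run_tests` and `apply_fix` are placeholder callables and the cap of 3 attempts is an arbitrary example value, not anything the workflow currently specifies:

```python
def run_with_bounded_fixes(run_tests, apply_fix, max_attempts: int = 3) -> dict:
    """Run the test suite, applying fixes between attempts, but escalate
    to the user instead of looping forever."""
    failures: list = []
    for attempt in range(1, max_attempts + 1):
        failures = run_tests()  # returns a list of failure descriptions
        if not failures:
            return {"status": "green", "attempts": attempt}
        if attempt < max_attempts:
            apply_fix(failures)  # a fix may itself introduce new failures
    # Out of attempts: surface the remaining failures rather than retrying.
    return {"status": "escalate", "attempts": max_attempts, "failures": failures}
```

On "escalate", the workflow would stop and hand the failure list to the user instead of continuing.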
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/qa-generate-e2e-tests/workflow.md` around lines 82 - 86, Update "Step 4: Run Tests" to include concrete failure-handling rules: define a max retry count (e.g., 3) with exponential/backoff between attempts, detect recurring or environment-specific failures and mark them as "unfixable" after retries, and specify an escalation path (e.g., notify user/raise an issue with logs and CI artifacts) instead of retrying indefinitely; also add guidance for what to do when a fix introduces new failures (revert to last green commit or create a diagnostic ticket) and ensure all outcomes are logged and break the workflow on escalation.
🧹 Nitpick comments (4)
src/bmm/workflows/4-implementation/retrospective/workflow.md (2)
1366-1366: Unspecified date format in filename.

Line 1366 uses `{date}` in the filename pattern `epic-{{epic_number}}-retro-{date}.md`, but the date format is not specified in the initialization section (line 40 just says "system-generated current datetime"). Different date formats could produce:

- epic-1-retro-2026-03-07.md (ISO)
- epic-1-retro-03-07-2026.md (US)
- epic-1-retro-1709830800.md (Unix timestamp)

Consider specifying the expected format.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/4-implementation/retrospective/workflow.md` at line 1366, The filename pattern in the Set filename action ("Set filename: {implementation_artifacts}/epic-{{epic_number}}-retro-{date}.md") uses an ambiguous {date} produced by the initialization step that currently says "system-generated current datetime"; change this to a specific, documented format (e.g., ISO 8601 "YYYY-MM-DD" or a Unix timestamp) and update the code that populates {date} to format the datetime accordingly so filenames are deterministic and consistent across runs.
50-56: Complex input file patterns may fail silently.

The input file patterns use wildcards (`*epic*.md`, `*architecture*/*.md`) with multiple fallback strategies. If patterns match zero files or multiple unexpected files, there's no explicit error handling or user notification. Consider adding validation that confirms at least one file matched for required inputs (epics, architecture, prd).
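The suggested validation can be sketched as below; the function name and pattern dictionary are hypothetical, and only the `*epic*.md` / `*architecture*/*.md` patterns come from the workflow itself:

```python
import glob

def resolve_input_patterns(patterns: dict, required: set) -> dict:
    """Expand each glob pattern; fail fast if a required input matched nothing,
    warn (but continue) for optional inputs."""
    resolved, missing = {}, []
    for key, pattern in patterns.items():
        matches = sorted(glob.glob(pattern, recursive=True))
        if not matches and key in required:
            missing.append(f"{key}: pattern {pattern!r} matched no files")
        elif not matches:
            print(f"warning: optional input '{key}' matched no files ({pattern!r})")
        resolved[key] = matches
    if missing:
        raise FileNotFoundError("required inputs unresolved -> " + "; ".join(missing))
    return resolved
```

Reporting the key together with the failed pattern tells the user exactly which fallback to fix.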
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/4-implementation/retrospective/workflow.md` around lines 50 - 56, The table of input patterns (keys: epics, previous_retrospective, architecture, prd, document_project) can silently match zero or many files; add validation in the input resolution path (e.g., a function like validateInputPatterns or the code that implements SELECTIVE_LOAD/FULL_LOAD/INDEX_GUIDED) that resolves each glob, asserts required inputs (epics, architecture, prd) have at least one matched file, and emits a clear error or warning (fail the workflow for required inputs, warn/skip for optional ones like previous_retrospective/document_project) including the input key and the expanded file list so users know which pattern failed.

src/bmm/workflows/qa-generate-e2e-tests/workflow.md (1)
141-143: Missing checklist existence verification.

Line 143 references `{checklist}` for validation, but the workflow never verifies this file exists. If `checklist.md` is missing from the installed workflow directory, the validation step will fail silently or produce confusing errors. Consider adding a check in INITIALIZATION or EXECUTION that the checklist file is accessible.
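The presence check could be as small as the following sketch; `require_checklist` is a hypothetical helper name, not existing workflow code:

```python
from pathlib import Path

def require_checklist(checklist_path: str) -> Path:
    """Fail fast with a descriptive error if the checklist file is absent,
    instead of letting the later validation step fail confusingly."""
    path = Path(checklist_path)
    if not path.is_file():
        raise FileNotFoundError(
            f"checklist not found at '{path}'; ensure checklist.md is present "
            "in the installed workflow directory before running validation"
        )
    return path
```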
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/qa-generate-e2e-tests/workflow.md` around lines 141 - 143, The workflow references `{checklist}` but never verifies the file exists; add a presence check during INITIALIZATION (or at start of EXECUTION) that resolves the `{checklist}` path and asserts accessibility, e.g., attempt to stat/read `{checklist}` and if missing emit a clear error message and fail fast (or optionally create/seed a default checklist), ensuring the validation step that writes "Validate against `{checklist}`" won't run against a non-existent file; update the workflow.md INITIALIZATION/EXECUTION section to perform this check and produce a descriptive error before proceeding.

src/bmm/workflows/4-implementation/sprint-status/workflow.md (1)
18-24: Inconsistent variable reference syntax.

The configuration loading section mixes syntax styles:

- Lines 20-23: Use backticks without braces for variable names (documentation style)
- Line 24: Uses single braces `{communication_language}` (runtime interpolation style)

This inconsistency may confuse agents about which values are literal documentation vs. runtime-interpolated. Consider normalizing to one style throughout the INITIALIZATION section.
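If the brace convention is chosen, every `{name}` becomes a real interpolation point. A minimal sketch of such a resolver (hypothetical, not existing BMAD code):

```python
import re

def interpolate(text: str, config: dict) -> str:
    """Replace {var} placeholders with config values; unknown names are left
    intact so they stay visible rather than silently vanishing."""
    def replace(match: re.Match) -> str:
        name = match.group(1)
        return str(config.get(name, match.group(0)))
    return re.sub(r"\{(\w+)\}", replace, text)
```

Leaving unknown placeholders untouched makes a typo in a variable name show up in the rendered output instead of disappearing.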
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/bmm/workflows/4-implementation/sprint-status/workflow.md` around lines 18 - 24, Normalize variable reference syntax in the INITIALIZATION section by choosing one style and applying it consistently: replace the mixed usage of backticks (project_name, user_name, communication_language, document_output_language, implementation_artifacts, date) and the runtime-interpolation `{communication_language}` with a single convention—prefer using runtime-style braces for all runtime values (e.g., {project_name}, {user_name}, {communication_language}, {document_output_language}, {implementation_artifacts}, {date}) or else use backticks for all as pure documentation; also update the instruction line "YOU MUST ALWAYS SPEAK OUTPUT..." to match that chosen style so agents clearly understand these are runtime-config variables.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@SPRINT_STATUS_CONVERSION_TEST_RESULTS.md`:
- Around line 20-24: The test fixtures referenced ("14 fixture YAML files") are
missing, making reproduction impossible; add the 14 fixture YAML files either
inline in SPRINT_STATUS_CONVERSION_TEST_RESULTS.md (where the diff references
them) or place them under a test/fixtures/ directory in the repo and update the
document to point to that location, ensuring filenames and contents match the
fixture definitions referenced in the original session and that the document
clearly lists each fixture name and a brief description for clarity.
- Around line 14-16: The test results file contains hardcoded absolute developer
paths in the ls commands (the two /Users/alex/... entries) which break
reproducibility; replace those absolute paths in
SPRINT_STATUS_CONVERSION_TEST_RESULTS.md with relative/project-root paths or
environment variables (e.g., ./src/bmm/workflows/... or
$PROJECT_ROOT/src/bmm/workflows/...) and update the ls invocations to use the
chosen relative or env-based reference so other contributors can run the same
commands without exposing your local filesystem.
In `@src/bmm/workflows/4-implementation/retrospective/workflow.md`:
- Line 11: The workflow.md has a rule "No time estimates" but the dialogue later
includes hourly estimates from "Charlie (Senior Dev)" at the instances showing
"1. {{tech_prep_item_1}} - estimated {{hours_1}} hours" and "2.
{{tech_prep_item_2}} - estimated {{hours_2}} hours"; fix by removing numeric
hour references or converting those lines to effort-based phrasing (e.g., "small
effort", "moderate effort", "significant effort") so the Charlie dialogue
complies with the rule, updating the placeholders ({{hours_1}}, {{hours_2}}) to
non-time descriptors or removing them entirely.
In `@src/bmm/workflows/qa-generate-e2e-tests/workflow.md`:
- Around line 30-32: The workflow's default_output_file (`default_output_file =
{implementation_artifacts}/tests/test-summary.md`) assumes a nested tests folder
that may not exist; either ensure the directory exists before writing or
simplify the path to avoid a subfolder. Update the workflow to either (a) add a
step/action that creates the `{implementation_artifacts}/tests` directory (e.g.,
mkdir -p) before the artifact write, or (b) change `default_output_file` to
`{implementation_artifacts}/test-summary.md` to match the sprint-status pattern;
reference `default_output_file`, `implementation_artifacts`, and existing
`test_dir`/`source_dir` when making the change.
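Option (a) from the comment can be sketched as follows; `write_output_file` is a hypothetical helper, and only `default_output_file` comes from the workflow:

```python
from pathlib import Path

def write_output_file(default_output_file: str, content: str) -> Path:
    """Create any missing parent directories (the mkdir -p equivalent)
    before writing, so a nested tests/ folder cannot cause the write to fail."""
    out = Path(default_output_file)
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(content, encoding="utf-8")
    return out
```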
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: b1040289-2678-49f3-9234-47d5041d1882
⛔ Files ignored due to path filters (1)
src/bmm/module-help.csv is excluded by `!**/*.csv`
📒 Files selected for processing (9)
- SPRINT_STATUS_CONVERSION_TEST_RESULTS.md
- src/bmm/agents/qa.agent.yaml
- src/bmm/agents/sm.agent.yaml
- src/bmm/workflows/4-implementation/retrospective/workflow.md
- src/bmm/workflows/4-implementation/retrospective/workflow.yaml
- src/bmm/workflows/4-implementation/sprint-status/workflow.md
- src/bmm/workflows/4-implementation/sprint-status/workflow.yaml
- src/bmm/workflows/qa-generate-e2e-tests/workflow.md
- src/bmm/workflows/qa-generate-e2e-tests/workflow.yaml
💤 Files with no reviewable changes (3)
- src/bmm/workflows/4-implementation/retrospective/workflow.yaml
- src/bmm/workflows/4-implementation/sprint-status/workflow.yaml
- src/bmm/workflows/qa-generate-e2e-tests/workflow.yaml
Triage Results for CodeRabbit Review Body Findings

The following findings from the review body (outside-diff and nitpick comments) have been triaged:

Outside-diff findings
Nitpick findings
…ingle workflow.md Merge workflow config and instruction content into a unified workflow.md with YAML frontmatter, following the established convention for converted workflows. Update module-help.csv reference accordingly. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…version The <critical> preamble removal incorrectly classified this behavioral rule as boilerplate. It is an actual output constraint. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- 14 test fixtures covering data and validate modes - Tested across Opus, Sonnet, and Haiku models - OLD format (yaml+md) vs NEW format (workflow.md) - Confirms zero regressions in conversion - Includes reproduction instructions for future sessions
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
….md to single workflow.md Task 2 of yaml-to-md conversion plan. Merges config variables into INITIALIZATION section, inlines instructions into EXECUTION section. Drops non-consumed yaml keys (required_tools, tags, execution_hints). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ingle workflow.md Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…kflows Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
SPRINT_STATUS_CONVERSION_TEST_RESULTS.md contains hardcoded local filesystem paths and is a session-specific test artifact. Added to .bare/info/exclude to keep it ignored across all worktrees. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Force-pushed from 2073a64 to e4d93ed
Summary
Convert 3 workflows from legacy two-file format (workflow.yaml + instructions.md) to single self-contained workflow.md:
Each conversion:
- Drops `<critical>` tags (workflow.xml references)

Steps 4-9 (sprint-planning, correct-course, code-review, create-story, dev-story, document-project) will follow in subsequent PRs.
Testing
Step 1 (sprint-status) — 14-fixture deterministic test suite across 3 models
Ran OLD vs NEW format through 14 test fixtures in data mode (step 20) and validate mode (step 30) across Opus, Sonnet, and Haiku. All deterministic outputs matched. Full results in SPRINT_STATUS_CONVERSION_TEST_RESULTS.md.

Key findings:
Step 2 (qa-generate-e2e-tests) — simulated-human test with real codebase
Two runs of OLD vs NEW format (4 total agent executions) on the bmad-assist Python project produced identical step sequences, same questions to the simulated human, same framework detection (pytest), same summary structure, and same checklist coverage. Zero conversion bugs.
Also confirmed via `git log -S` that the `required_tools`, `tags`, and `execution_hints` yaml keys were never consumed by any code — safe to drop.

Step 3 (retrospective) — adversarial review
Three-reviewer adversarial review (blind hunter, edge case hunter, acceptance auditor). All 5 acceptance criteria passed. No critical or major findings. 5 minor cosmetic observations, all rejected as non-issues.
Test plan
Generated with Claude Code