docs: fix contradictory Quinn workflow placement in testing reference (#1911)
Conversation
The testing reference page incorrectly described Quinn's Automate workflow as running per-story before Code Review, contradicting the workflow map which positions it after epic completion. Align the testing page with the workflow map: Quinn runs after all stories in an epic are implemented and code-reviewed.
📝 Walkthrough

The PR updates documentation to clarify Quinn's Automate workflow placement, repositioning it from a per-story activity to an epic-level activity that runs after all stories are implemented and code-reviewed, resolving contradictions between reference pages.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~8 minutes
🚥 Pre-merge checks: ✅ 5 passed checks
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/reference/testing.md`:
- Around line 98-102: Reconcile the two Quinn usage recommendations by adding
one sentence to the Phase 4 paragraph (the hunk describing Quinn in
Implementation) that clarifies Quinn supports both lightweight, ad-hoc "quick
test coverage for a new or existing feature" use cases and a formal post-epic
test-generation step; reference "Quinn" and the earlier "quick test coverage for
a new or existing feature" guidance and state when to prefer each (use
ad-hoc/feature-level for fast validation during development, and use the
post-epic/Phase 4 flow for comprehensive epic-wide test generation and
consistency).
- Around line 98-101: The text under "Quinn's Automate workflow" in Phase 4 is
missing the "Correct Course" gate from the per-story loop; update the paragraph
and sequence to explicitly mention that Automate runs after each story has been
implemented, code-reviewed, and passed the Correct Course step (or phrase it as
"implemented, code-reviewed, and corrected via Correct Course") and adjust the
numbered sequence to include Correct Course before the epic-level Automate step
so the flow reads: implement with Dev (DS), validate with Code Review (CR),
apply Correct Course (CC) per story, then after the epic is complete generate
tests with Quinn (QA) or TEA's Automate workflow.
- Around line 101-102: testing.md asserts a fixed ordering where step "3. Run
retrospective (`bmad-retrospective`) to capture lessons learned" immediately
follows Automate, but workflow-map.md Phase 4 is marked incomplete and only
generically lists "Review after epic completion"; update either file to avoid a
contradictory definitive sequence: either (A) edit workflow-map.md Phase 4 to
include the explicit sequencing and step details for retrospectives to match
testing.md, referencing the Phase 4 header and any Phase 4 automation notes, or
(B) soften testing.md step 3 to indicate the retrospective timing is subject to
the final Phase 4 automation/design (e.g., change language to "Run retrospective
as defined by Phase 4 automation"), so both documents remain consistent.
- Line 101: The current line conflates timing for Quinn and TEA: update the
sentence that reads "After the epic is complete: generate tests with Quinn
(`QA`) or TEA's Automate workflow" to separate the two workflows and their
timing—state that Quinn/QA test generation occurs after epic completion, while
TEA's Automate runs iteratively during implementation as stories are implemented
and stabilized (reference the phrase "generate tests with Quinn (`QA`) or TEA's
Automate workflow" and the term "TEA's Automate" to locate the text); rewrite
the guidance so it clearly instructs Quinn for post-epic test generation and
TEA's Automate for ongoing, per-story automation.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 822212f4-d292-4cca-9b53-db647aa8dd17
📒 Files selected for processing (1)
docs/reference/testing.md
```diff
+Quinn's Automate workflow appears in Phase 4 (Implementation) of the BMad Method workflow map. It is designed to run **after a full epic is complete** — once all stories in an epic have been implemented and code-reviewed. A typical sequence:
+
-1. Implement a story with the Dev workflow (`DS`)
-2. Generate tests with Quinn (`QA`) or TEA's Automate workflow
-3. Validate implementation with Code Review (`CR`)
+1. For each story in the epic: implement with Dev (`DS`), then validate with Code Review (`CR`)
+2. After the epic is complete: generate tests with Quinn (`QA`) or TEA's Automate workflow
+3. Run retrospective (`bmad-retrospective`) to capture lessons learned
```
The new workflow guidance now fights the earlier “quick feature coverage” positioning.
Earlier in this page, Quinn is recommended for “quick test coverage for a new or existing feature,” which reads like an ad hoc or feature-level use case. This hunk now presents Quinn as a formal post-epic activity. Add one sentence reconciling those two modes, or readers will wonder which guidance is actually authoritative.
```diff
+Quinn's Automate workflow appears in Phase 4 (Implementation) of the BMad Method workflow map. It is designed to run **after a full epic is complete** — once all stories in an epic have been implemented and code-reviewed. A typical sequence:
+
-1. Implement a story with the Dev workflow (`DS`)
-2. Generate tests with Quinn (`QA`) or TEA's Automate workflow
-3. Validate implementation with Code Review (`CR`)
+1. For each story in the epic: implement with Dev (`DS`), then validate with Code Review (`CR`)
+2. After the epic is complete: generate tests with Quinn (`QA`) or TEA's Automate workflow
```
This still looks short of the documented gate sequence.
The linked issue says Automate sits after Code Review and Correct Course, but this text only says “implemented and code-reviewed.” If Correct Course is part of the required per-story loop, the contradiction is only half fixed.
Suggested doc tweak

```diff
-Quinn's Automate workflow appears in Phase 4 (Implementation) of the BMad Method workflow map. It is designed to run **after a full epic is complete** — once all stories in an epic have been implemented and code-reviewed. A typical sequence:
+Quinn's Automate workflow appears in Phase 4 (Implementation) of the BMad Method workflow map. It is designed to run after the implementation loop for an epic is complete — once all stories in the epic have been implemented, code-reviewed, and any Correct Course follow-up has been handled. A typical sequence:
-1. For each story in the epic: implement with Dev (`DS`), then validate with Code Review (`CR`)
-2. After the epic is complete: generate tests with Quinn (`QA`) or TEA's Automate workflow
+1. For each story in the epic: implement with Dev (`DS`), validate with Code Review (`CR`), and handle Correct Course if needed
+2. After every story in the epic has cleared that loop: generate tests with Quinn (`QA`)
```
```diff
+2. After the epic is complete: generate tests with Quinn (`QA`) or TEA's Automate workflow
+3. Run retrospective (`bmad-retrospective`) to capture lessons learned
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
file="$(fd '^workflow-map\.md$' docs src | head -n1)"
echo "Inspecting: $file"
nl -ba "$file" | sed -n '1,260p' | rg -n -C3 'Phase 4|Automate|QA|retrospective|Correct Course'
```

Repository: bmad-code-org/BMAD-METHOD
Length of output: 169
🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
# Find and inspect workflow-map.md
echo "=== WORKFLOW-MAP.MD ==="
file="$(fd '^workflow-map\.md$' docs src 2>/dev/null | head -n1)"
if [ -n "$file" ]; then
  echo "File: $file"
  cat "$file" | grep -n -i -E 'Phase 4|Automate|QA|retrospective|Correct Course' | head -20
else
  echo "File not found"
fi
echo ""
echo "=== TESTING.MD CONTEXT ==="
# Inspect testing.md around lines 101-102
test_file="$(fd '^testing\.md$' docs src 2>/dev/null | head -n1)"
if [ -n "$test_file" ]; then
  echo "File: $test_file"
  sed -n '95,110p' "$test_file"
else
  echo "File not found"
fi
```

Repository: bmad-code-org/BMAD-METHOD
Length of output: 1239
Verify ordering claim against the workflow-map.
testing.md step 3 (retrospective immediately after Automate) is presented as a definitive sequence, but workflow-map.md Phase 4 is explicitly incomplete ("Coming soon, full phase 4 automation!"). The map lists retrospective generically as "Review after epic completion" without documenting the detailed step-by-step ordering. Either add explicit sequencing detail to workflow-map.md Phase 4, or soften testing.md's claim to acknowledge that the exact ordering depends on final Phase 4 automation design.
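The kind of drift the reviewer describes can also be flagged mechanically. A minimal sketch, assuming the file paths listed in this PR; stand-in copies of the two pages are written locally so the snippet runs without a checkout:

```shell
#!/bin/sh
# Hypothetical consistency check: flag when testing.md asserts a fixed
# retrospective ordering while workflow-map.md still marks Phase 4 as
# incomplete. The file contents below are stand-ins for the real docs.
mkdir -p docs/reference
printf '%s\n' '3. Run retrospective (`bmad-retrospective`) to capture lessons learned' \
  > docs/reference/testing.md
printf '%s\n' 'Coming soon, full phase 4 automation!' \
  > docs/reference/workflow-map.md

if grep -q 'bmad-retrospective' docs/reference/testing.md \
    && grep -qi 'coming soon' docs/reference/workflow-map.md; then
  echo 'MISMATCH: testing.md asserts an ordering workflow-map.md has not finalized'
else
  echo 'OK'
fi
```

A check like this could run in CI so the two pages cannot silently diverge again once Phase 4 automation is finalized.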
```diff
-2. Generate tests with Quinn (`QA`) or TEA's Automate workflow
-3. Validate implementation with Code Review (`CR`)
+1. For each story in the epic: implement with Dev (`DS`), then validate with Code Review (`CR`)
+2. After the epic is complete: generate tests with Quinn (`QA`) or TEA's Automate workflow
```
🧩 Analysis chain
🌐 Web query:
BMAD Test Architect enterprise Automate workflow when should it run after full epic complete
💡 Result:
For BMAD Test Architect (TEA) / enterprise-style testing, you generally should not wait until the entire epic is finished to run `*automate`.

- At the start of the epic (before implementing stories): run `*test-design` (epic-level) to set priorities, risks, and the coverage plan that will drive what you automate. [1]
- During the epic (as stories are implemented / stabilize): run `*automate` iteratively to generate the tests for the "existing features" you just built, rather than batching everything at the end. (BMAD positions automation in the implementation (Phase 4) track.) [3][4]
- After each story and/or at epic checkpoints: run `*trace` Phase 1 to refresh coverage and identify gaps; then add/adjust tests (often via `*automate`) based on what trace reports. [2]
- After the epic is complete: you can run `*trace` again (Epic gate if you're gating) and then do the epic retrospective — but that's a review/decision point, not the best first time to generate most tests. [2][3]

Rule of thumb: automate continuously within the epic, and use `*trace` after each story/epic to confirm coverage, instead of deferring automation until the epic is fully complete. [2][3]

Sources: [1] [2] [3] [4]
TEA's Automate workflow runs iteratively during implementation, not after the epic is complete.
The snippet conflates Quinn's testing sequence with TEA's Automate timing. TEA's Automate should run iteratively during the epic (as stories are implemented and stabilize), not deferred until epic completion. Revise to separate these timings and align TEA's guidance with its iterative automation model.
good catch - actually there should be no reference in this doc to TEA as that is a separate module. But this is an improvement as is, thanks @nikolasdehor
…bmad-code-org#1911) The testing reference page incorrectly described Quinn's Automate workflow as running per-story before Code Review, contradicting the workflow map which positions it after epic completion. Align the testing page with the workflow map: Quinn runs after all stories in an epic are implemented and code-reviewed.
Summary
Fixes #1759
The testing reference page incorrectly described Quinn's Automate workflow as running per-story before Code Review, contradicting the workflow map which positions it after epic completion. This creates confusion about when to run Quinn.
Changes
docs/reference/testing.md: Updated "How Testing Fits into Workflows" section to align with the workflow map — Quinn runs after a full epic is complete (all stories implemented and code-reviewed), not per-story before Code Review

Test plan
docs/reference/workflow-map.md