
docs: fix contradictory Quinn workflow placement in testing reference#1911

Merged
bmadcode merged 1 commit into bmad-code-org:main from
nikolasdehor:docs/fix-quinn-workflow-placement
Mar 12, 2026

Conversation

@nikolasdehor
Contributor

Summary

Fixes #1759

The testing reference page incorrectly described Quinn's Automate workflow as running per-story before Code Review, contradicting the workflow map, which positions it after epic completion. This created confusion about when to run Quinn.

Changes

  • docs/reference/testing.md: Updated "How Testing Fits into Workflows" section to align with the workflow map — Quinn runs after a full epic is complete (all stories implemented and code-reviewed), not per-story before Code Review

Test plan

  • Verify the updated sequence matches the Phase 4 table in docs/reference/workflow-map.md
  • Review rendered documentation for clarity

The testing reference page incorrectly described Quinn's Automate
workflow as running per-story before Code Review, contradicting the
workflow map which positions it after epic completion. Align the testing
page with the workflow map: Quinn runs after all stories in an epic are
implemented and code-reviewed.
@coderabbitai

coderabbitai bot commented Mar 12, 2026

📝 Walkthrough

Walkthrough

The PR updates documentation to clarify Quinn's Automate workflow placement, repositioning it from a per-story activity to an epic-level activity that runs after all stories are implemented and code-reviewed, resolving contradictions between reference pages.

Changes

  • Documentation Update — docs/reference/testing.md: Reorganized workflow sequencing to clarify that Quinn's Automate/TEA workflow runs at the epic level after all stories complete, not per-story; updated accompanying prose and workflow diagram description.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Possibly related PRs

  • docs: tea editorial review #1313: Restructures TEA lifecycle and epic-level workflow guidance documentation similarly to clarify phase and integration point placement.

Suggested reviewers

  • muratkeremozcan
  • bmadcode
🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
  • Title check — Passed: The title clearly and specifically describes the main change: fixing contradictory placement of Quinn workflow in documentation.
  • Description check — Passed: The description is directly related to the changeset, explaining the documentation inconsistency being fixed and the changes made.
  • Linked Issues check — Passed: The PR successfully addresses the linked issue #1759 by updating docs/reference/testing.md to align Quinn's Automate workflow placement with the workflow map (epic-level, after all stories and code reviews).
  • Out of Scope Changes check — Passed: All changes are scoped to resolving the documentation inconsistency in issue #1759 with no extraneous modifications detected.
  • Docstring Coverage — Passed: No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 822212f4-d292-4cca-9b53-db647aa8dd17

📥 Commits

Reviewing files that changed from the base of the PR and between 32693f1 and dd079d4.

📒 Files selected for processing (1)
  • docs/reference/testing.md

Comment on lines +98 to +102
Quinn's Automate workflow appears in Phase 4 (Implementation) of the BMad Method workflow map. It is designed to run **after a full epic is complete** — once all stories in an epic have been implemented and code-reviewed. A typical sequence:

-1. Implement a story with the Dev workflow (`DS`)
-2. Generate tests with Quinn (`QA`) or TEA's Automate workflow
-3. Validate implementation with Code Review (`CR`)
+1. For each story in the epic: implement with Dev (`DS`), then validate with Code Review (`CR`)
+2. After the epic is complete: generate tests with Quinn (`QA`) or TEA's Automate workflow
+3. Run retrospective (`bmad-retrospective`) to capture lessons learned

⚠️ Potential issue | 🟡 Minor

The new workflow guidance now fights the earlier “quick feature coverage” positioning.

Earlier in this page, Quinn is recommended for “quick test coverage for a new or existing feature,” which reads like an ad hoc or feature-level use case. This hunk now presents Quinn as a formal post-epic activity. Add one sentence reconciling those two modes, or readers will wonder which guidance is actually authoritative.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/reference/testing.md` around lines 98 - 102, Reconcile the two Quinn
usage recommendations by adding one sentence to the Phase 4 paragraph (the hunk
describing Quinn in Implementation) that clarifies Quinn supports both
lightweight, ad-hoc "quick test coverage for a new or existing feature" use
cases and a formal post-epic test-generation step; reference "Quinn" and the
earlier "quick test coverage for a new or existing feature" guidance and state
when to prefer each (use ad-hoc/feature-level for fast validation during
development, and use the post-epic/Phase 4 flow for comprehensive epic-wide test
generation and consistency).

Comment on lines +98 to +101
Quinn's Automate workflow appears in Phase 4 (Implementation) of the BMad Method workflow map. It is designed to run **after a full epic is complete** — once all stories in an epic have been implemented and code-reviewed. A typical sequence:

-1. Implement a story with the Dev workflow (`DS`)
-2. Generate tests with Quinn (`QA`) or TEA's Automate workflow
-3. Validate implementation with Code Review (`CR`)
+1. For each story in the epic: implement with Dev (`DS`), then validate with Code Review (`CR`)
+2. After the epic is complete: generate tests with Quinn (`QA`) or TEA's Automate workflow

⚠️ Potential issue | 🟠 Major

This still looks short of the documented gate sequence.

The linked issue says Automate sits after Code Review and Correct Course, but this text only says “implemented and code-reviewed.” If Correct Course is part of the required per-story loop, the contradiction is only half fixed.

Suggested doc tweak
-Quinn's Automate workflow appears in Phase 4 (Implementation) of the BMad Method workflow map. It is designed to run **after a full epic is complete** — once all stories in an epic have been implemented and code-reviewed. A typical sequence:
+Quinn's Automate workflow appears in Phase 4 (Implementation) of the BMad Method workflow map. It is designed to run after the implementation loop for an epic is complete — once all stories in the epic have been implemented, code-reviewed, and any Correct Course follow-up has been handled. A typical sequence:
 
-1. For each story in the epic: implement with Dev (`DS`), then validate with Code Review (`CR`)
-2. After the epic is complete: generate tests with Quinn (`QA`) or TEA's Automate workflow
+1. For each story in the epic: implement with Dev (`DS`), validate with Code Review (`CR`), and handle Correct Course if needed
+2. After every story in the epic has cleared that loop: generate tests with Quinn (`QA`)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/reference/testing.md` around lines 98 - 101, The text under "Quinn's
Automate workflow" in Phase 4 is missing the "Correct Course" gate from the
per-story loop; update the paragraph and sequence to explicitly mention that
Automate runs after each story has been implemented, code-reviewed, and passed
the Correct Course step (or phrase it as "implemented, code-reviewed, and
corrected via Correct Course") and adjust the numbered sequence to include
Correct Course before the epic-level Automate step so the flow reads: implement
with Dev (DS), validate with Code Review (CR), apply Correct Course (CC) per
story, then after the epic is complete generate tests with Quinn (QA) or TEA's
Automate workflow.

Comment on lines +101 to +102
2. After the epic is complete: generate tests with Quinn (`QA`) or TEA's Automate workflow
3. Run retrospective (`bmad-retrospective`) to capture lessons learned

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

file="$(fd '^workflow-map\.md$' docs src | head -n1)"
echo "Inspecting: $file"
nl -ba "$file" | sed -n '1,260p' | rg -n -C3 'Phase 4|Automate|QA|retrospective|Correct Course'

Repository: bmad-code-org/BMAD-METHOD

Length of output: 169


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Find and inspect workflow-map.md
echo "=== WORKFLOW-MAP.MD ==="
file="$(fd '^workflow-map\.md$' docs src 2>/dev/null | head -n1)"
if [ -n "$file" ]; then
  echo "File: $file"
  cat "$file" | grep -n -i -E 'Phase 4|Automate|QA|retrospective|Correct Course' | head -20
else
  echo "File not found"
fi

echo ""
echo "=== TESTING.MD CONTEXT ==="
# Inspect testing.md around lines 101-102
test_file="$(fd '^testing\.md$' docs src 2>/dev/null | head -n1)"
if [ -n "$test_file" ]; then
  echo "File: $test_file"
  sed -n '95,110p' "$test_file"
else
  echo "File not found"
fi

Repository: bmad-code-org/BMAD-METHOD

Length of output: 1239


Verify ordering claim against the workflow-map.

testing.md step 3 (retrospective immediately after Automate) is presented as a definitive sequence, but workflow-map.md Phase 4 is explicitly incomplete ("Coming soon, full phase 4 automation!"). The map lists retrospective generically as "Review after epic completion" without documenting the detailed step-by-step ordering. Either add explicit sequencing detail to workflow-map.md Phase 4, or soften testing.md's claim to acknowledge that the exact ordering depends on final Phase 4 automation design.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/reference/testing.md` around lines 101 - 102, testing.md asserts a fixed
ordering where step "3. Run retrospective (`bmad-retrospective`) to capture
lessons learned" immediately follows Automate, but workflow-map.md Phase 4 is
marked incomplete and only generically lists "Review after epic completion";
update either file to avoid a contradictory definitive sequence: either (A) edit
workflow-map.md Phase 4 to include the explicit sequencing and step details for
retrospectives to match testing.md, referencing the Phase 4 header and any Phase
4 automation notes, or (B) soften testing.md step 3 to indicate the
retrospective timing is subject to the final Phase 4 automation/design (e.g.,
change language to "Run retrospective as defined by Phase 4 automation"), so
both documents remain consistent.

-2. Generate tests with Quinn (`QA`) or TEA's Automate workflow
-3. Validate implementation with Code Review (`CR`)
+1. For each story in the epic: implement with Dev (`DS`), then validate with Code Review (`CR`)
+2. After the epic is complete: generate tests with Quinn (`QA`) or TEA's Automate workflow

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🌐 Web query:

BMAD Test Architect enterprise Automate workflow when should it run after full epic complete

💡 Result:

For BMAD Test Architect (TEA) / enterprise-style testing, you generally should not wait until the entire epic is finished to run *automate.

  • At the start of the epic (before implementing stories): run *test-design (epic-level) to set priorities, risks, and the coverage plan that will drive what you automate. [1]
  • During the epic (as stories are implemented / stabilize): run *automate iteratively to generate the tests for the “existing features” you just built, rather than batching everything at the end. (BMAD positions automation in the implementation (Phase 4) track.) [3][4]
  • After each story and/or at epic checkpoints: run trace Phase 1 to refresh coverage and identify gaps; then add/adjust tests (often via *automate) based on what trace reports. [2]
  • After the epic is complete: you can run trace again (Epic gate if you’re gating) and then do the epic retrospective—but that’s a review/decision point, not the best first time to generate most tests. [2][3]

Rule of thumb: automate continuously within the epic, and use trace after each story/epic to confirm coverage, instead of deferring automation until the epic is fully complete. [2][3]

Sources: [1] [2] [3] [4]


TEA's Automate workflow runs iteratively during implementation, not after the epic is complete.

The snippet conflates Quinn's testing sequence with TEA's Automate timing. TEA's Automate should run iteratively during the epic (as stories are implemented and stabilize), not deferred until epic completion. Revise to separate these timings and align TEA's guidance with its iterative automation model.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/reference/testing.md` at line 101, The current line conflates timing for
Quinn and TEA: update the sentence that reads "After the epic is complete:
generate tests with Quinn (`QA`) or TEA's Automate workflow" to separate the two
workflows and their timing—state that Quinn/QA test generation occurs after epic
completion, while TEA's Automate runs iteratively during implementation as
stories are implemented and stabilized (reference the phrase "generate tests
with Quinn (`QA`) or TEA's Automate workflow" and the term "TEA's Automate" to
locate the text); rewrite the guidance so it clearly instructs Quinn for
post-epic test generation and TEA's Automate for ongoing, per-story automation.

@bmadcode
Collaborator

good catch - actually there should be no reference in this doc to TEA, as that is a separate module. But this is an improvement as is, thanks @nikolasdehor

@bmadcode bmadcode merged commit df9a7f9 into bmad-code-org:main Mar 12, 2026
alexeyv pushed a commit to alexeyv/BMAD-METHOD that referenced this pull request Mar 13, 2026
…bmad-code-org#1911)

The testing reference page incorrectly described Quinn's Automate
workflow as running per-story before Code Review, contradicting the
workflow map which positions it after epic completion. Align the testing
page with the workflow map: Quinn runs after all stories in an epic are
implemented and code-reviewed.

Development

Successfully merging this pull request may close these issues.

[DOCS] Contradictory placement of Quinn's Automate workflow between Workflow Map and Testing Reference
