
refactor(skills): convert editorial-review-prose.xml to native skill #1877

Merged
alexeyv merged 3 commits into bmad-code-org:main from alexeyv:refactor/convert-editorial-review-prose
Mar 11, 2026

Conversation

@alexeyv
Collaborator

@alexeyv alexeyv commented Mar 9, 2026

Summary

  • Convert editorial-review-prose.xml from single-file XML task to native skill directory (SKILL.md + workflow.md + bmad-skill-manifest.yaml)
  • Update parent manifest and module-help.csv references
  • No behavioral changes — all inputs, principles, steps, halt conditions, and output format preserved exactly

Test plan

  • Run bmad-cli.js install and verify bmad-editorial-review-prose skill appears in .claude/skills/
  • Invoke the skill and confirm behavior matches original XML task
  • Verify no remaining references to editorial-review-prose.xml in source tree

🤖 Generated with Claude Code

@alexeyv alexeyv marked this pull request as ready for review March 9, 2026 16:45
@github-actions
Contributor

github-actions bot commented Mar 9, 2026

@coderabbitai review

@augmentcode

augmentcode bot commented Mar 9, 2026

🤖 Augment PR Summary

Summary: This PR refactors the existing editorial-review-prose.xml task into a native “skill” directory while keeping the editorial instructions intact.

Changes:

  • Added a new native skill at src/core/tasks/bmad-editorial-review-prose/ with SKILL.md, workflow.md, and a bmad-skill-manifest.yaml declaring type: skill.
  • Moved the full prompt content (inputs, principles, step-by-step execution flow, halt conditions, and output format) into workflow.md.
  • Removed the legacy single-file XML task src/core/tasks/editorial-review-prose.xml.
  • Updated the parent task manifest (src/core/tasks/bmad-skill-manifest.yaml) to drop the old XML mapping for this task.
  • Adjusted module help/registry references (via module-help.csv) to point at the new skill (file skipped in this review per instructions).

Technical Notes: The new skill follows the existing native-skill pattern used by other skills in this repo (directory name as canonical ID; manifest type: skill; metadata in SKILL.md frontmatter; detailed behavior in workflow.md).
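For readers unfamiliar with the pattern, a minimal sketch of the resulting layout (file names and the `type: skill` field are from this PR; the inline comments summarize each file's role as described above and are not the merged file contents):

```
src/core/tasks/bmad-editorial-review-prose/
├── SKILL.md                  # YAML frontmatter metadata + pointer to workflow.md
├── workflow.md               # inputs, principles, execution steps, halt conditions, output format
└── bmad-skill-manifest.yaml  # declares `type: skill`; canonical ID comes from the directory name
```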

🤖 Was this summary useful? React with 👍 or 👎


@augmentcode augmentcode bot left a comment


Review completed. No suggestions at this time.

Comment augment review to trigger a new review at any time.

@coderabbitai

coderabbitai bot commented Mar 9, 2026

📝 Walkthrough

Walkthrough

This PR converts the Editorial Review - Prose task from an XML-based definition to a new skill-based structure using Markdown documentation. The migration removes the legacy XML task file and its manifest entry, replacing them with new skill files including SKILL.md documentation, workflow.md instructions, and a skill manifest.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Legacy XML Task Removal**<br>`src/core/tasks/editorial-review-prose.xml`, `src/core/tasks/bmad-skill-manifest.yaml` | Removes the old XML-based Editorial Review - Prose task definition and its manifest entry referencing the task as canonical ID `bmad-editorial-review-prose`. |
| **New Skill Structure**<br>`src/core/tasks/bmad-editorial-review-prose/SKILL.md`, `src/core/tasks/bmad-editorial-review-prose/bmad-skill-manifest.yaml`, `src/core/tasks/bmad-editorial-review-prose/workflow.md` | Introduces the new skill-based structure with YAML front matter declaring the skill, a manifest file identifying it as a skill type, and a detailed workflow document specifying the editorial review process with input validation, style analysis, and three-column Markdown table output format. |
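As a rough sketch of that three-column output format (column names and the first-column sample are quoted from the review comments on this PR; the revision and note in the sample row are illustrative, not taken from workflow.md):

```markdown
| Original Text | Revised Text | Changes |
| --- | --- | --- |
| The system will processes data and it handles errors. | The system processes data and handles errors. | Fixed verb agreement; removed redundant pronoun |
```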

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

Suggested reviewers

  • bmadcode
🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title accurately and concisely summarizes the main change: converting an XML task file to a native skill directory structure. |
| Description check | ✅ Passed | The description clearly explains the conversion from XML to native skill format, mentions preserved behavior and references, and provides a test plan aligned with the changeset. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (9)
src/core/tasks/bmad-editorial-review-prose/SKILL.md (2)

6-6: SKILL.md body is too minimal—missing essential context.

The body provides zero information about inputs, outputs, or usage. Users must navigate to workflow.md to learn anything about the skill. At minimum, document the required and optional inputs here (content, style_guide, reader_type) and the output format (3-column table).

📋 Suggested expansion
Follow the instructions in [workflow.md](workflow.md).

## Inputs

- **content** (required) — Text to review (markdown, plain text, or text-heavy XML)
- **style_guide** (optional) — Project-specific style guide that overrides default principles
- **reader_type** (optional, default: humans) — Target audience: `humans` or `llm`

## Output

Three-column Markdown table: Original Text | Revised Text | Changes

See [workflow.md](workflow.md) for detailed execution steps and principles.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/core/tasks/bmad-editorial-review-prose/SKILL.md` at line 6, SKILL.md is
too minimal—expand the body to document expected inputs and outputs: add an
"Inputs" section listing content (required), style_guide (optional), reader_type
(optional, default "humans"), and an "Output" section describing the
three-column Markdown table format (Original Text | Revised Text | Changes);
reference workflow.md for execution details and keep the initial line "Follow
the instructions in [workflow.md](workflow.md)" at the top so readers see both
the quick pointer and the required input/output contract.

3-3: Description mixes capability with invocation trigger.

The description conflates what the skill does ("Clinical copy-editor that reviews text for communication issues") with when to invoke it ("Use when user says review for prose or improve the prose"). Capability descriptions should focus purely on what the skill accomplishes, not invocation patterns.

✏️ Cleaner description
-description: 'Clinical copy-editor that reviews text for communication issues. Use when user says review for prose or improve the prose'
+description: 'Reviews text for communication issues that impede comprehension and outputs a table of suggested editorial fixes'

The invocation trigger can be documented in the workflow body or a separate usage guide if needed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/core/tasks/bmad-editorial-review-prose/SKILL.md` at line 3, The
description in SKILL.md mixes capability and invocation; update the description
string (the value for description in the SKILL.md frontmatter) to a pure
capability statement such as "Clinical copy-editor that reviews text for
communication issues" and remove the invocation guidance ("Use when user says
review for prose or improve the prose"); move any invocation guidance into the
workflow/usage docs instead (document triggers in the workflow body or a
separate usage guide).
src/core/tasks/bmad-editorial-review-prose/bmad-skill-manifest.yaml (1)

1-1: Consider adding description field for consistency.

While type: skill alone is technically valid, other manifests in the codebase include a description field. Adding it here would improve consistency and potentially aid tooling that scans manifests without parsing SKILL.md frontmatter.

📝 Proposed addition
+canonicalId: bmad-editorial-review-prose
 type: skill
+description: Clinical copy-editor that reviews text for communication issues

Note: The canonicalId field will be ignored (derived from directory name), but including it documents the intended ID explicitly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/core/tasks/bmad-editorial-review-prose/bmad-skill-manifest.yaml` at line
1, The manifest only contains "type: skill" and should include a "description"
field for consistency with other manifests and tooling; update the
bmad-skill-manifest.yaml to add a brief "description:" entry describing the
skill and optionally include a "canonicalId:" field (noting it will be
ignored/derived) to document the intended ID—ensure the keys are added at
top-level alongside "type: skill" so tools scanning manifests or SKILL.md
frontmatter see the metadata.
src/core/tasks/bmad-editorial-review-prose/workflow.md (6)

43-45: reader_type selection guidance is too vague.

Lines 43-45 describe what each reader_type value prioritizes but don't provide clear guidance on WHEN to choose llm vs humans. Users won't know which to select without explicit use cases or decision criteria.

📖 Add selection guidance
 - Calibrate review approach based on reader_type:
-  - If `llm`: Prioritize unambiguous references, consistent terminology, explicit structure, no hedging
-  - If `humans`: Prioritize clarity, flow, readability, natural progression
+  - If `llm`: Use when text will be consumed by AI systems (prompts, API docs, structured data descriptions). Prioritize unambiguous references, consistent terminology, explicit structure, no hedging.
+  - If `humans`: Use when text will be read by people (documentation, UI copy, marketing). Prioritize clarity, flow, readability, natural progression.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/core/tasks/bmad-editorial-review-prose/workflow.md` around lines 43 - 45,
The guidance for reader_type selection is too vague; update the workflow.md
section that defines reader_type (the 'llm' vs 'humans' branch) to include
explicit decision criteria and concrete use cases: describe scenarios when to
pick 'llm' (e.g., when output will be consumed by downstream models, automated
pipelines, or parsers requiring strict unambiguous phrasing, consistent
terminology, explicit structure, and no hedging) and when to pick 'humans'
(e.g., for editorial reads, publishable prose, readability-focused reviews, or
audiences preferring natural flow and nuance), add a short checklist or bullet
criteria for each option, and include one or two brief examples per option to
make selection deterministic for users.

39-46: Awkward flow: style_guide consulted in Step 3 but style analyzed in Step 2.

Step 2 (lines 39-46) analyzes "style, tone, and voice" but doesn't consult the optional style_guide input. Then Step 3 (line 49) conditionally checks the style_guide for the first time. This creates an awkward flow where the agent analyzes style without knowing the authoritative style requirements.

Move the style_guide consultation to Step 2:

🔄 Reorder for logical flow
 ### Step 2: Analyze Style
 
+- If style_guide provided: Review style_guide now and note its key requirements — these override default principles
 - Analyze the style, tone, and voice of the input text
 - Note any intentional stylistic choices to preserve (informal tone, technical jargon, rhetorical patterns)
 - Calibrate review approach based on reader_type:

Then simplify Step 3 line 49:

 ### Step 3: Editorial Review (CRITICAL)
 
-- If style_guide provided: Consult style_guide now and note its key requirements — these override default principles for this review
+- Apply style_guide requirements noted in Step 2 (if provided)
 - Review all prose sections (skip code blocks, frontmatter, structural markup)

Also applies to: 49-49

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/core/tasks/bmad-editorial-review-prose/workflow.md` around lines 39 - 46,
Step 2: Analyze Style currently inspects "style, tone, and voice" before the
optional style_guide is consulted in Step 3; move the style_guide consultation
into Step 2 by updating the "Step 2: Analyze Style" bullet to explicitly consult
and apply the optional style_guide input (preserve the existing notes about
reader_type branching), and then simplify "Step 3" to assume style requirements
are already known and focus only on structure/content consistency and
enforcement (e.g., rename or reword Step 3 to "Step 3: Check Structure &
Compliance" or similar); update any conditional checks and wording in the Step 3
text to remove redundant style_guide handling so the flow is: Step 2 uses
style_guide, Step 3 verifies compliance.

77-81: Redundant HALT CONDITIONS section duplicates Step 1 validation.

Lines 77-81 repeat the same halt conditions already specified in Step 1 (lines 32-35). This redundancy creates maintenance burden—if validation logic changes, it must be updated in two places. Consider removing this section or adding a cross-reference instead: "See Step 1 for halt conditions."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/core/tasks/bmad-editorial-review-prose/workflow.md` around lines 77 - 81,
The HALT CONDITIONS section (header "HALT CONDITIONS" and its three bullets
mentioning empty/fewer-than-3-words, reader_type values, and the "No editorial
issues identified" message) duplicates validation already defined in Step 1;
remove this duplicate block and either delete it entirely or replace it with a
single cross-reference line such as "See Step 1 for halt conditions" so
validation is maintained in one place (update any occurrences of the literal "No
editorial issues identified" only if Step 1 changes it).

67-67: "Brief explanation" is undefined—how brief is brief?

Line 67 specifies "Brief explanation of what changed and why" but provides no guidance on brevity. Should explanations be one sentence? One clause? A few words? Without bounds, agents will produce inconsistent output—some will write detailed rationales, others terse labels.

📏 Add specificity
-| The exact original passage | The suggested revision | Brief explanation of what changed and why |
+| The exact original passage | The suggested revision | Brief explanation (one sentence max) of what changed and why |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/core/tasks/bmad-editorial-review-prose/workflow.md` at line 67, The table
header "Brief explanation of what changed and why" is ambiguous; update that
header to enforce a specific length and format (e.g., "Brief explanation (one
sentence, ≤140 characters)") and add a short guideline beneath the table or in a
tooltip section that clarifies "one sentence, max 140 characters or one short
clause" so agents produce consistent, concise rationale for changes; locate the
header text "Brief explanation of what changed and why" in workflow.md and
replace it and add the guideline text near the table.

71-74: Example table lacks edge case demonstrations.

The example table (lines 71-74) shows only simple cases: a grammar fix and a spelling fix. It doesn't demonstrate any of the challenging scenarios users will encounter:

  • Multi-line original text
  • Text with special characters or pipes
  • Uncertain fixes phrased as queries
  • Deduplicated issues with multiple locations

More diverse examples would improve clarity and reduce ambiguity.

📚 Add edge case examples
| For comprehensive details on installation see our setup guide which covers prerequisites | For comprehensive installation details, see our setup guide (covers prerequisites) | Reduced wordiness; improved flow with parenthetical |
| Consider: Replace "data is processed" with "we process data"? | | Query form: unsure if active voice is appropriate given formal tone |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/core/tasks/bmad-editorial-review-prose/workflow.md` around lines 71 - 74,
The example table that currently shows only the rows "The system will processes
data and it handles errors." and "Users can chose from options (lines 12, 45,
78)" needs additional edge-case examples: add rows demonstrating a multi-line
Original Text entry, an Original Text containing pipes/special characters, a
Revised Text left blank to represent an uncertain/query-style suggestion, and a
row showing a deduplicated issue with multiple locations (e.g., same Original
Text with a note like "appears at lines X, Y"); update the table schema to allow
a "Notes" or "Locations" column so uncertain fixes and multiple locations can be
recorded and ensure the new rows mirror the existing columns (Original Text |
Revised Text | Changes).

19-19: "Code blocks" undefined—what syntax triggers skipping?

Line 19 instructs to "Detect and skip code blocks, frontmatter, structural markup" but doesn't define what constitutes a "code block." Should the agent skip:

  • Fenced code blocks (```...```)?
  • Inline code (`...`)?
  • Indented code blocks (4-space indent)?
  • XML/HTML tags?

Without clear detection rules, agents will inconsistently identify skippable content.

🔍 Add detection rules
-3. **Skip code/markup:** Detect and skip code blocks, frontmatter, structural markup
+3. **Skip code/markup:** Skip fenced code blocks (```), YAML frontmatter (---...---), and structural markup (HTML/XML tags). Review inline code (backtick-wrapped) for clarity issues in surrounding prose only.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/core/tasks/bmad-editorial-review-prose/workflow.md` at line 19, The
instruction "Detect and skip code blocks, frontmatter, structural markup" is
ambiguous; update that line to explicitly list the syntaxes to skip: fenced code
blocks (```...```), indented code blocks (4-space or tab), YAML frontmatter
(---...---), and HTML/XML block tags; for inline code (`...`) only inspect
surrounding prose for clarity and do not remove or alter the inline code itself;
ensure the workflow rule named exactly "Detect and skip code blocks,
frontmatter, structural markup" is replaced with these explicit detection rules
so agents consistently identify and skip the correct constructs.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/core/tasks/bmad-editorial-review-prose/workflow.md`:
- Line 5: Update the incorrect section reference in the intro sentence that
reads "Execute ALL steps in the flow section IN EXACT ORDER" to reference the
actual section name "EXECUTION"; locate that string in workflow.md (the exact
phrase to change) and replace "flow section" with "EXECUTION" so the instruction
correctly points to the section starting at the "EXECUTION" header.
- Line 11: Remove the redundant documentation of the style_guide override from
the Inputs section: delete the brief sentence "- **style_guide** (optional) —
Project-specific style guide. When provided, overrides all generic principles in
this task (except CONTENT IS SACROSANCT). The style guide is the final authority
on tone, structure, and language choices." and retain only the detailed "STYLE
GUIDE OVERRIDE" callout in the PRINCIPLES section; ensure the "style_guide"
input remains declared (if needed) but without the duplicate explanatory text
and verify the "STYLE GUIDE OVERRIDE" heading fully covers the override
behavior.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 6a3c5ce9-f154-4f3a-90ac-ef5e2cf8af7e

📥 Commits

Reviewing files that changed from the base of the PR and between 956c43f and 60ead57.

⛔ Files ignored due to path filters (1)
  • src/core/module-help.csv is excluded by !**/*.csv
📒 Files selected for processing (5)
  • src/core/tasks/bmad-editorial-review-prose/SKILL.md
  • src/core/tasks/bmad-editorial-review-prose/bmad-skill-manifest.yaml
  • src/core/tasks/bmad-editorial-review-prose/workflow.md
  • src/core/tasks/bmad-skill-manifest.yaml
  • src/core/tasks/editorial-review-prose.xml
💤 Files with no reviewable changes (2)
  • src/core/tasks/bmad-skill-manifest.yaml
  • src/core/tasks/editorial-review-prose.xml

@alexeyv alexeyv force-pushed the refactor/convert-editorial-review-prose branch from 60ead57 to 8084fe4 on March 11, 2026 10:58
alexeyv and others added 3 commits March 11, 2026 05:49
…directory

Replace single-file XML task with standard skill directory structure
(SKILL.md + workflow.md + bmad-skill-manifest.yaml). Update parent
manifest and module-help.csv references. No behavioral changes.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…orkflow

Review found missing MANDATORY step-order enforcement and critical marker
on Step 3, which were present in the original XML and the reference pattern
(editorial-review-structure). Adds both to match established conventions.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Rename EXECUTION heading to STEPS and update the role instruction
to reference the correct section name.
@alexeyv alexeyv force-pushed the refactor/convert-editorial-review-prose branch from d314263 to f5af65a on March 11, 2026 11:49
@alexeyv alexeyv merged commit 3339f24 into bmad-code-org:main Mar 11, 2026
5 checks passed
@alexeyv alexeyv deleted the refactor/convert-editorial-review-prose branch March 11, 2026 12:08
