feat: add editorial review tasks for structure and prose #1307

bmadcode merged 5 commits into bmad-code-org:main
Add two complementary editorial review tasks:

- editorial-review-structure.xml: Structural editor that proposes cuts, reorganization, and simplification. Includes 5 document archetype models (Tutorial, Reference, Explanation, Prompt, Strategic) for targeted evaluation.
- editorial-review-prose.xml: Clinical copy-editor for prose improvements, using the Microsoft Writing Style Guide as the baseline.

Both tasks support `humans` and `llm` target audiences with different principles.
📝 Walkthrough

This change introduces two new editorial review tasks with XML workflow definitions for prose copy-editing and structural document editing. It adds governance constraints to an existing adversarial review task and extends the CIS module configuration to expose additional core config variables for downstream use.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 0
🧹 Nitpick comments (1)
src/core/tasks/editorial-review-structure.xml (1)
17-22: Inconsistent parameter naming between editorial tasks.

This task uses `reader_type` (lines 20-22) for the humans/llm distinction, while editorial-review-prose.xml uses `target_audience` for the same purpose. This inconsistency may confuse users invoking both tasks in sequence (structure → prose, as intended per line 10's objective). Consider aligning the naming:

- Use `reader_type` in both tasks for the humans/llm optimization flag
- Keep `target_audience` for the audience description (who reads the doc), as used here
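A minimal sketch of what the aligned inputs could look like. The element and attribute names here are assumptions based on the review's description, not taken from the actual task files:

```xml
<!-- Hypothetical sketch: shared input naming for both editorial tasks.
     Element and attribute names are illustrative only. -->
<inputs>
  <!-- Free-text description of who will read the document -->
  <input name="target_audience" required="true"/>
  <!-- Optimization flag: "humans" or "llm" -->
  <input name="reader_type" required="true" allowed="humans,llm"/>
</inputs>
```

With this split, both tasks would accept the same two inputs, so a structure → prose pipeline could pass them through unchanged.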
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)

- src/core/tasks/editorial-review-prose.xml
- src/core/tasks/editorial-review-structure.xml
- src/core/tasks/review-adversarial-general.xml
- src/modules/cis/module.yaml
🧰 Additional context used
📓 Path-based instructions (1)
**/*
⚙️ CodeRabbit configuration file
**/*: Focus on inconsistencies, contradictions, edge cases and serious issues.
Avoid commenting on minor issues such as linting, formatting and style issues.
When providing code suggestions, use GitHub's suggestion format: <code changes>
Files:

- src/core/tasks/editorial-review-structure.xml
- src/core/tasks/editorial-review-prose.xml
- src/core/tasks/review-adversarial-general.xml
- src/modules/cis/module.yaml
🔇 Additional comments (5)
src/modules/cis/module.yaml (1)
7-12: LGTM!

The addition of the `document_output_language` and `output_folder` variables follows the existing pattern and expands core-config accessibility for downstream tasks, which aligns with the new editorial tasks being introduced.

src/core/tasks/review-adversarial-general.xml (1)

12-16: LGTM!

The governance policy additions are clear and consistent with the existing task structure. The halt-conditions section (lines 41-44) properly backs the "HALT immediately when halt-conditions are met" instruction.
src/core/tasks/editorial-review-prose.xml (1)

1-88: LGTM!

The task definition is well-structured, with consistent terminology across inputs, flow actions, and halt-conditions. The `target_audience` parameter is properly validated and used throughout the flow. The deduplication and merging principles (lines 29-30) are appropriately reflected in the editorial review step (lines 58-59). The output format with the three-column table is clearly specified, with a helpful example.

src/core/tasks/editorial-review-structure.xml (2)
73-103: Well-designed structure models.

The five document archetypes (Tutorial, Reference, Explanation, Prompt, Strategic) provide clear evaluation criteria. The MECE principle (mutually exclusive, collectively exhaustive) mentioned for the Reference and Strategic models is a good structural validation approach.
147-160: Good categorization system for recommendations.

The CUT/MERGE/MOVE/CONDENSE/QUESTION/PRESERVE categories provide actionable, non-overlapping classification. The explicit PRESERVE category, for elements that "might seem cuttable but serve comprehension", is a thoughtful addition that prevents over-aggressive editing, especially given the human-reader-principles guidance.
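As a purely hypothetical illustration of this scheme (the element and attribute names are assumed, not taken from the real task file), one structural recommendation entry might look like:

```xml
<!-- Hypothetical sketch: a single structural recommendation.
     Names are illustrative; the actual task file may differ. -->
<recommendation category="PRESERVE">
  <target>Worked example in the overview section</target>
  <rationale>Might seem cuttable, but serves first-read comprehension</rationale>
</recommendation>
```

Keeping each entry to one category from the fixed set is what makes the output non-overlapping and directly actionable.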
Both editorial review tasks (prose and structure) were missing the key constraint that reviewers should never challenge the ideas or knowledge themselves, only how clearly they are communicated. This restores the original design intent.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The prose task was using 'target_audience' for the humans/llm optimization flag, while the structure task correctly separates 'target_audience' (who reads) from 'reader_type' (optimization mode). Aligns the prose task to 'reader_type'.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Summary
Adds two complementary editorial review tasks for document improvement:
Both tasks support `humans` and `llm` target audiences with different review principles.

Features
Structure Task
Prose Task
Test plan