Conversation
Add Cursor plugin alongside existing Claude Code plugin using the single-plugin format from cursor/plugin-template. The repo now serves both platforms:
- `.claude-plugin/` → Claude Code
- `.cursor-plugin/` → Cursor

Components:
- 9 enriched skills with triggers, workflows, commands, and guardrails
- 3 agents (planner, reviewer, scout) with Cursor-compatible frontmatter
- 6 rules (.mdc format) for quality gates, context discipline, etc.
- SVG logo asset
Caution: Review failed. The pull request is closed.

📝 Walkthrough
Adds a Cursor plugin manifest and a collection of documentation and rule/skill files: agent docs updates, six new rule files, multiple new skill guides, and a README overhaul reflecting the Pro Workflow plugin layout and metadata.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Keep tools restrictions (planner=read-only, reviewer/scout=read+bash) so Claude Code agents work correctly. Remove model field entirely — both Cursor and Claude Code will use their platform defaults.
Add Cursor installation section, skills table, rules table, and update structure to reflect dual-platform support (Claude Code + Cursor).
Actionable comments posted: 5
🧹 Nitpick comments (8)
rules/no-debug-statements.mdc (1)
10-13: Go and Rust logging framework exemptions are missing from the exceptions clause.
The exemptions on Line 13 name JS and Python logging libraries but leave Go and Rust without equivalents. A developer removing `fmt.Println()` from Go code has no rule-specified replacement to reach for, which makes the rule inconsistent across the languages it covers.

♻️ Suggested addition
```diff
-Exceptions: Legitimate logging using a logging framework (winston, pino, logging module) is fine.
+Exceptions: Legitimate logging using a logging framework is fine:
+- JavaScript/TypeScript: winston, pino, bunyan
+- Python: logging module, loguru
+- Go: log/slog, go.uber.org/zap, zerolog, logrus
+- Rust: log + env_logger, tracing, slog
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@rules/no-debug-statements.mdc` around lines 10 - 13, The exceptions list currently mentions JS and Python logging libraries but omits Go and Rust equivalents; update the rule text near the exceptions clause to add common logging frameworks for Go (e.g., the standard log package and popular libs like logrus, zap) and for Rust (e.g., the log crate and tracing) so that uses of fmt.Println, dbg!(), and println!() for debugging are disallowed while legitimate logging via log/ logrus/ zap (Go) and log/ tracing (Rust) are explicitly allowed; reference the symbols fmt.Println, dbg!, println!, and the "Exceptions" clause when making the change.

skills/smart-commit/SKILL.md (1)
27-29: Quality gates are npm-only; consider making them language-agnostic.
The broader plugin covers Python, Go, and Rust (see `rules/no-debug-statements.mdc`), but the quality-gate commands are hard-coded to npm. Users on other stacks will hit "command not found" errors and likely skip the gates entirely.
Consider a comment indicating that these are example commands and should be adapted per project, or offer conditional snippets per language.
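A minimal sketch of such conditional, per-language gates. The manifest files checked are common conventions; the per-stack command strings are examples only and should be adapted to the project, not taken from this repo:

```shell
#!/bin/sh
# Hypothetical quality-gate dispatcher: picks a gate command line by
# detecting which manifest file the target directory contains.
detect_stack() {
  dir=$1
  if   [ -f "$dir/package.json" ];   then echo "node"
  elif [ -f "$dir/pyproject.toml" ]; then echo "python"
  elif [ -f "$dir/go.mod" ];         then echo "go"
  elif [ -f "$dir/Cargo.toml" ];     then echo "rust"
  else echo "unknown"
  fi
}

run_gates() {
  # Echoes the example gate command for the detected stack.
  case "$(detect_stack "$1")" in
    node)   echo "npm run lint && npm run typecheck && npm test" ;;
    python) echo "ruff check . && mypy . && pytest" ;;
    go)     echo "golangci-lint run && go test ./..." ;;
    rust)   echo "cargo clippy -- -D warnings && cargo test" ;;
    *)      echo "no gates configured" ;;
  esac
}

demo=$(mktemp -d)
touch "$demo/go.mod"
run_gates "$demo"   # prints: golangci-lint run && go test ./...
rm -rf "$demo"
```

A real skill would execute the chosen commands rather than echo them; echoing keeps the sketch inspectable.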
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@skills/smart-commit/SKILL.md` around lines 27 - 29, The quality-gate snippets currently hard-code npm commands ("npm run lint", "npm run typecheck", "npm test -- --changed --passWithNoTests"), which will fail for non-JS projects; update SKILL.md to mark these lines as example/npm-specific commands and either (a) add a short comment above them stating "Example commands — adapt per language (Python/Go/Rust/etc.)" or (b) provide conditional alternative snippets for other ecosystems (e.g., flake8/mypy/pytest for Python, golangci-lint/go test for Go, cargo clippy/cargo test for Rust), so readers know to replace the npm commands for their stack.

.cursor-plugin/plugin.json (1)
4-4: Starting version at `1.2.0` is unconventional for a new plugin.
Typically a first release starts at `0.1.0` or `1.0.0`. If this is intentionally aligned with the companion Claude Code plugin's version, add a comment in the README or CHANGELOG documenting this convention so it doesn't appear accidental to marketplace consumers.
Verify each finding against the current code and only fix it if needed. In @.cursor-plugin/plugin.json at line 4, The "version" key in .cursor-plugin/plugin.json is set to "1.2.0" which is unconventional for a new plugin; either change the value of the "version" field to a conventional initial release (e.g., "0.1.0" or "1.0.0") in plugin.json, or if "1.2.0" is intentional to align with the companion Claude Code plugin, add an explicit note in the README or CHANGELOG explaining the versioning convention so consumers understand this is deliberate (reference the "version" property in plugin.json when making the change).

skills/learn-rule/SKILL.md (1)
23-27: Add language specifiers to fenced code blocks (MD040).
Both blocks contain plaintext output; specifying `text` satisfies markdownlint and prevents renderers from attempting syntax highlighting.

♻️ Proposed fix
````diff
-```
+```text
 [LEARN] Category: One-line rule
 Mistake: What went wrong
 Correction: How it was fixed
````

````diff
-```
+```text
 Recent mistake: Edited wrong utils.ts file
 ...
````

Also applies to: 44-50

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@skills/learn-rule/SKILL.md` around lines 23 - 27, The markdown fenced code blocks in SKILL.md are missing language specifiers which triggers MD040; update the two plaintext fences that contain "[LEARN] Category: One-line rule / Mistake: ... Correction: ..." and the block starting "Recent mistake: Edited wrong utils.ts file" to use a plaintext language tag (e.g., ```text) so markdownlint passes and renderers don't attempt syntax highlighting.

skills/replay-learnings/SKILL.md (1)
24-42: Add language specifier to the output example block (MD040).
The fenced block at line 24 is missing a language tag; use `text` for this plaintext output template.

♻️ Proposed fix

````diff
-```
+```text
 REPLAY BRIEFING: <task>
 =======================
 ...
````

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@skills/replay-learnings/SKILL.md` around lines 24 - 42, The fenced code block containing the REPLAY BRIEFING example is missing a language tag; update the triple-backtick opener for that block (the block starting with "REPLAY BRIEFING: <task>" in SKILL.md) to include the language specifier `text` so the fence reads ```text, ensuring the Markdown linter MD040 is satisfied.

skills/insights/SKILL.md (1)
18-18: Add `text` language specifiers to all four output example blocks (MD040).
Lines 18, 29, 48, and 68 each open a fenced block without a language tag. All contain plaintext output; adding `text` suppresses the markdownlint warnings and avoids unintended syntax highlighting.

♻️ Proposed fix (apply to all four openings)
````diff
-```
+```text
 Session Insights
 ...
````

Also applies to: 29-29, 48-48, 68-68
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@skills/insights/SKILL.md` at line 18, In SKILL.md the four fenced output example blocks (including the one that begins with "Session Insights") are plain text but open with ``` (no language) which triggers MD040; update each opening fence to ```text so all four plaintext example blocks use the text language specifier and suppress markdownlint warnings.

agents/planner.md (1)
3-13: Threshold mismatch between the `description` field and the `## Trigger` body.
Line 3 (`description`) surfaces `>5 files` as the file-count signal but omits the `>10 tool calls` heuristic. Line 13 (`## Trigger`) lists `>10 tool calls expected` but drops the `>5 files` condition. Users who see only the plugin description (e.g., in Cursor's skill picker) get an incomplete picture, while the body text gives a different set of signals.
Consider aligning them:
♻️ Proposed fix
```diff
-description: Break down complex tasks into implementation plans before writing code. Use when task touches >5 files, requires architecture decisions, or has unclear requirements.
+description: Break down complex tasks into implementation plans before writing code. Use when task touches >5 files, requires >10 tool calls, requires architecture decisions, or has unclear requirements.
```

```diff
-Use when multi-file changes, architecture decisions, unclear requirements, or >10 tool calls expected.
+Use when task touches >5 files, architecture decisions, unclear requirements, or >10 tool calls expected.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agents/planner.md` around lines 3 - 13, The description field and the Planner trigger text disagree on thresholds—'description' currently lists ">5 files" while the "## Trigger" body lists ">10 tool calls expected" but omits the files condition; update them to match by choosing a single heuristic set and applying it to both places (e.g., include both ">5 files" and ">10 tool calls expected" in the top-level description string and ensure the "## Trigger" paragraph for Planner also lists both signals), referencing the 'description' field and the "## Trigger" section in the Planner doc so users see the same criteria everywhere.

agents/reviewer.md (1)
21-21: Optional: restore unicode superscript for `O(n²)`.
`O(n^2)` renders as plain text in GitHub-flavoured Markdown and most Cursor renderers; the prior `O(n²)` renders the proper superscript everywhere without requiring any Markdown extensions.
📝 One-character fix

```diff
-5. **Performance** - O(n^2) loops, memory?
+5. **Performance** - O(n²) loops, memory?
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agents/reviewer.md` at line 21, Replace the ASCII superscript in the performance note by changing the plain "O(n^2)" string to use the Unicode superscript "O(n²)"; locate the occurrence of "O(n^2)" in agents/reviewer.md (the performance bullet or heading) and update it so the rendered text shows the proper superscript everywhere.
```diff
 name: scout
-description: Confidence-gated exploration agent that assesses readiness before implementation
+description: Confidence-gated exploration that assesses readiness before implementation. Scores 0-100 across five dimensions and gives GO/HOLD verdict.
 tools: ["Read", "Glob", "Grep", "Bash"]
```
`Bash` in `tools` contradicts the "read-only exploration" rule.
Line 4 grants the agent `Bash`, which can execute arbitrary write operations (`> file`, `sed -i`, `tee`, etc.). Line 52 declares "Never edit files. Read-only exploration." The rule is a soft guardrail only; the tool capability remains unrestricted. If read-only is the hard intent, remove `Bash` from `tools` (shell-based reads like `find`/`cat` can be replaced by `Glob`/`Read`), or explicitly enumerate the permitted Bash operations in the rule.
📝 Option: remove Bash for strict read-only
```diff
-tools: ["Read", "Glob", "Grep", "Bash"]
+tools: ["Read", "Glob", "Grep"]
```

Also applies to: 52-52
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@agents/scout.md` at line 4, The tools list currently includes "Bash" which
conflicts with the agent rule "Never edit files. Read-only exploration."; update
the tools declaration (the tools array containing "Read", "Glob", "Grep",
"Bash") so it enforces read-only behavior—either remove "Bash" from the tools
array entirely and rely on "Read"/"Glob"/"Grep", or keep "Bash" but explicitly
document and enforce a whitelist of allowed read-only shell operations (e.g.,
only use non-writing commands like cat/find/grep) in the agent rules so the
intent and capability are consistent.
```text
3. Score confidence (0-100)
4. If >= 70: GO with findings
5. If < 70: Identify what's missing, gather more context, re-score
```
Workflow step 5 omits the 2-round escalation constraint defined in Rules
The Workflow at step 5 only says "Identify what's missing, gather more context, re-score," with no upper bound. The Rules section (line 54) adds the critical constraint: "If still < 70 after 2 rounds, escalate to user." An agent following the Workflow section alone has no stopping condition and could loop indefinitely. Embed the bound directly in the Workflow step to make the constraint self-contained.
📝 Proposed fix
```diff
-5. If < 70: Identify what's missing, gather more context, re-score
+5. If < 70: Identify what's missing, gather more context, re-score (max 2 rounds; escalate to user if still < 70)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@agents/scout.md` around lines 19 - 21, Workflow step 5 currently says
"Identify what's missing, gather more context, re-score" with no stopping
condition; update Workflow step 5 to include the two-round escalation constraint
from the Rules section ("If still < 70 after 2 rounds, escalate to user") so the
workflow is self-contained — explicitly track and limit re-score attempts to 2
rounds and escalate to the user if the score remains <70 after those attempts
(refer to "Workflow step 5" and the Rules text "If still < 70 after 2 rounds,
escalate to user").
```text
Always work on feature branches. Never commit directly to main.

Review changes before pushing. Run `git diff --stat` to verify what's staged.
```
`git diff --stat` shows unstaged changes, not staged ones.
The guidance says to run `git diff --stat` to "verify what's staged," but this command diffs the working tree against the index, i.e., it shows unstaged changes. Users following this rule would be looking at the wrong set of files before a push, which is the exact scenario the guardrail is trying to prevent.
The correct command to inspect staged content is `git diff --staged --stat` (alias: `git diff --cached --stat`).
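A throwaway sandbox makes the difference concrete (repo path and file name are illustrative):

```shell
# Demonstrate that `git diff --stat` misses staged changes while
# `git diff --staged --stat` reports them, using a temporary repo.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m init

echo one > file.txt
git add file.txt             # change is staged; working tree matches index

git diff --stat              # prints nothing: compares worktree vs index
git diff --staged --stat     # reports file.txt: compares index vs HEAD
```

With the change staged and the working tree clean, only the `--staged` form surfaces `file.txt`, which is exactly what a pre-push review needs to see.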
🔧 Proposed fix
```diff
-Review changes before pushing. Run `git diff --stat` to verify what's staged.
+Review changes before pushing. Run `git diff --staged --stat` to verify what's staged.
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```suggestion
Review changes before pushing. Run `git diff --staged --stat` to verify what's staged.
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@rules/atomic-commits.mdc` at line 14, Update the guidance that currently
instructs running "git diff --stat" to verify staged changes: change the command
to "git diff --staged --stat" (or mention the alias "git diff --cached --stat")
so it inspects the index (staged content) rather than unstaged working-tree
changes; edit the sentence in rules/atomic-commits.mdc that references git diff
--stat to use the corrected command and optionally include the alias for
clarity.
```text
1. Identify the lesson — what mistake was made? What should happen instead?
2. Format the rule with full context.
3. Propose the addition and wait for user approval.
4. After approval, persist to LEARNED section or project memory.
```
Ambiguous persistence target may lead to inconsistent storage of learnings.
"Persist to LEARNED section or project memory" does not identify a canonical file. Based on learnings, the established convention is `LEARNED.md`. Without a concrete target, different AI sessions may store learnings to different backends, breaking the compounding learning model the skill is designed to support.
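A minimal append flow under the root-level LEARNED.md convention. The helper name and the entry layout (Category / Mistake / Correction, mirroring the skill's [LEARN] template) are hypothetical sketches, not part of the plugin:

```shell
# Sketch: append an approved learning to a root-level LEARNED.md,
# creating the file with a header the first time.
append_learning() {
  file=$1
  category=$2
  mistake=$3
  correction=$4
  [ -f "$file" ] || printf '# LEARNED\n\n' > "$file"
  printf -- '- **%s**: %s\n' "$category" "$correction" >> "$file"
  printf '  - Mistake: %s\n' "$mistake" >> "$file"
}

demo=$(mktemp -d)
append_learning "$demo/LEARNED.md" "Paths" \
  "Edited wrong utils.ts file" \
  "Confirm the target path before editing shared utils"
cat "$demo/LEARNED.md"
```

The important property is the fixed target file: every session appends to the same `LEARNED.md`, so learnings compound instead of scattering across backends.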
✏️ Proposed fix
```diff
-4. After approval, persist to LEARNED section or project memory.
+4. After approval, append to `LEARNED.md` (or the project's designated learning file).
```

Based on learnings: "When user corrects you, acknowledge the correction, propose an addition to LEARNED.md, and apply it after approval."
📝 Committable suggestion
```suggestion
4. After approval, append to `LEARNED.md` (or the project's designated learning file).
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/learn-rule/SKILL.md` at line 19, Replace the ambiguous "Persist to
LEARNED section or project memory" with a clear canonical target: always append
approved learnings to the repository's LEARNED.md file (root-level), not to
arbitrary project memory; update the SKILL.md instruction to state the exact
flow: on user correction acknowledge the correction, propose a short addition to
LEARNED.md, and only append that entry to LEARNED.md after explicit user
approval. Ensure the wording in SKILL.md references "LEARNED.md" and the
approve-then-append behavior so different sessions use the same canonical
storage.
````text
```
<type>(<scope>): <short summary>

<body - what changed and why>
```
````
Add language specifier to the fenced code block.
The commit-message template fence at Line 47 has no language specified, triggering MD040. Use `text` or `plaintext`.
🔧 Proposed fix
````diff
-```
+```text
 <type>(<scope>): <short summary>

 <body - what changed and why>
````

📝 Committable suggestion
````suggestion
```text
<type>(<scope>): <short summary>

<body - what changed and why>
```
````
🧰 Tools
🪛 markdownlint-cli2 (0.21.0)
[warning] 47-47: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/smart-commit/SKILL.md` around lines 47 - 51, The fenced code block in
the commit-message template missing a language specifier is triggering MD040;
update the commit-message template fence in SKILL.md so the opening
triple-backtick includes a language tag (e.g., "text" or "plaintext") to mark it
as a plain text block and satisfy the linter.
Summary
- `.claude-plugin/` for Claude Code
- `.cursor-plugin/` for Cursor

Components
Skills (9)
pro-workflow, smart-commit, wrap-up, learn-rule, parallel-worktrees, replay-learnings, session-handoff, insights, deslop

Agents (3)
planner, reviewer, scout

Rules (6)
quality-gates, atomic-commits, context-discipline, self-correction, no-debug-statements, communication-style

Test plan
- `.cursor-plugin/plugin.json` validates against Cursor plugin schema
- `/add-plugin` …
- … (`.claude-plugin/`)

Summary by CodeRabbit
New Features
Rules
Documentation