feat: add Cursor marketplace plugin#17

Merged
rohitg00 merged 4 commits into main from feat/cursor-plugin on Feb 19, 2026
Conversation


@rohitg00 rohitg00 commented Feb 19, 2026

Summary

  • Adds Cursor marketplace plugin alongside existing Claude Code plugin using the single-plugin format
  • The repo now serves both platforms: .claude-plugin/ for Claude Code, .cursor-plugin/ for Cursor
  • 9 enriched skills, 3 agents, 6 rules, SVG logo

Components

Skills (9)

  • pro-workflow: Core 8 workflow patterns for AI-assisted coding
  • smart-commit: Quality gates, staged review, and conventional commits
  • wrap-up: End-of-session ritual with change audit and learning capture
  • learn-rule: Capture corrections as persistent learning rules
  • parallel-worktrees: Set up git worktrees for zero dead time
  • replay-learnings: Surface past learnings relevant to the current task
  • session-handoff: Generate handoff documents for session continuity
  • insights: Session analytics, correction trends, and productivity metrics
  • deslop: Remove AI-generated code slop and clean up style

Agents (3)

  • planner: Break down complex tasks into implementation plans
  • reviewer: Code review for logic, security, and quality
  • scout: Confidence-gated exploration with 0-100 scoring

Rules (6)

  • quality-gates: Lint, typecheck, and test before commits
  • atomic-commits: Conventional format, feature branches, specific staging
  • context-discipline: Read before edit, plan before multi-file changes
  • self-correction: Capture mistakes as compounding learnings
  • no-debug-statements: Remove console.log, debugger, print before committing
  • communication-style: Concise, action-oriented, no over-engineering

Test plan

  • Verify .cursor-plugin/plugin.json validates against Cursor plugin schema
  • Test plugin install in Cursor via /add-plugin
  • Verify skills, agents, and rules load correctly in Cursor
  • Confirm existing Claude Code plugin still works (.claude-plugin/)

Summary by CodeRabbit

  • New Features

    • Added Pro Workflow plugin manifest and several new skills (insights, smart-commit, replay-learnings, session-handoff, parallel worktrees, deslop, learn-rule, wrap-up)
  • Rules

    • Introduced new operational policies: atomic commits, quality gates, no-debug-statements, communication style, context discipline, and self-correction
  • Documentation

    • Revised planner, reviewer, and scout guidance and overhauled README to reflect new workflows and installation/build instructions

Add Cursor plugin alongside existing Claude Code plugin using the
single-plugin format from cursor/plugin-template. The repo now serves
both platforms:
- .claude-plugin/ → Claude Code
- .cursor-plugin/ → Cursor

Components:
- 9 enriched skills with triggers, workflows, commands, and guardrails
- 3 agents (planner, reviewer, scout) with Cursor-compatible frontmatter
- 6 rules (.mdc format) for quality gates, context discipline, etc.
- SVG logo asset

coderabbitai bot commented Feb 19, 2026

Caution

Review failed

The pull request is closed.

📝 Walkthrough

Adds a Cursor plugin manifest and a collection of documentation and rule/skill files: agent docs updates, six new rule files, multiple new skill guides, and a README overhaul reflecting the Pro Workflow plugin layout and metadata.

Changes

  • Plugin Manifest (.cursor-plugin/plugin.json): Adds a new plugin manifest with metadata and paths to skills, agents, and rules for the Pro Workflow extension.
  • Agent Docs (agents/planner.md, agents/reviewer.md, agents/scout.md): Rewrites and restructures agent documentation: renames headers, replaces "When to Use" with "Trigger", consolidates outputs, and replaces "NEVER" sections with explicit "Rules".
  • Rules (rules/atomic-commits.mdc, rules/communication-style.mdc, rules/context-discipline.mdc, rules/no-debug-statements.mdc, rules/quality-gates.mdc, rules/self-correction.mdc): Adds six new declarative rule files covering commit conventions, communication style, context discipline, debug-statement removal, pre-commit quality gates, and a self-correction protocol.
  • Skills, new (skills/deslop/SKILL.md, skills/insights/SKILL.md, skills/learn-rule/SKILL.md, skills/parallel-worktrees/SKILL.md, skills/replay-learnings/SKILL.md, skills/session-handoff/SKILL.md, skills/smart-commit/SKILL.md, skills/wrap-up/SKILL.md): Adds eight new skill guides describing workflows for slop removal, session analytics, learning capture, parallel worktrees, replaying learnings, session handoffs, smart commits, and wrap-up rituals.
  • Skills, update (skills/pro-workflow/SKILL.md): Removes the tools front-matter field from the existing pro-workflow skill metadata (single-line deletion).
  • Repository README (README.md): Large rewrite: reorganized installation/build instructions, plugin loading for Cursor/Claude Code, updated structure/diagrams, and expanded Skills/Rules listing and narrative.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰 I hopped through rules and skills tonight,

plugin manifest shining bright.
Plans, scouts, reviewers all in line,
commits so tidy, checkpoints fine.
Carrots, learnings — one small bite. 🥕

🚥 Pre-merge checks: ✅ 3 passed

  • Description Check ✅ Passed: Check skipped because CodeRabbit’s high-level summary is enabled.
  • Title Check ✅ Passed: The title 'feat: add Cursor marketplace plugin' accurately reflects the primary objective: introducing a new Cursor plugin alongside the existing Claude Code plugin, as confirmed by the substantial additions to the .cursor-plugin/ directory and comprehensive new documentation files.
  • Docstring Coverage ✅ Passed: No functions found in the changed files to evaluate docstring coverage, so the docstring coverage check was skipped.




Keep tools restrictions (planner=read-only, reviewer/scout=read+bash)
so Claude Code agents work correctly. Remove model field entirely —
both Cursor and Claude Code will use their platform defaults.
Add Cursor installation section, skills table, rules table, and update
structure to reflect dual-platform support (Claude Code + Cursor).
@rohitg00 rohitg00 merged commit 1f6e7f6 into main Feb 19, 2026
4 checks passed
@rohitg00 rohitg00 deleted the feat/cursor-plugin branch February 19, 2026 18:23

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🧹 Nitpick comments (8)
rules/no-debug-statements.mdc (1)

10-13: Go and Rust logging framework exemptions are missing from the exceptions clause.

The exemptions on Line 13 name JS and Python logging libraries but leave Go and Rust without equivalents. A developer removing fmt.Println() from Go code has no rule-specified replacement to reach for, which makes the rule inconsistent across the languages it covers.

♻️ Suggested addition
-Exceptions: Legitimate logging using a logging framework (winston, pino, logging module) is fine.
+Exceptions: Legitimate logging using a logging framework is fine:
+- JavaScript/TypeScript: winston, pino, bunyan
+- Python: logging module, loguru
+- Go: log/slog, go.uber.org/zap, zerolog, logrus
+- Rust: log + env_logger, tracing, slog
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rules/no-debug-statements.mdc` around lines 10 - 13, The exceptions list
currently mentions JS and Python logging libraries but omits Go and Rust
equivalents; update the rule text near the exceptions clause to add common
logging frameworks for Go (e.g., the standard log package and popular libs like
logrus, zap) and for Rust (e.g., the log crate and tracing) so that uses of
fmt.Println, dbg!(), and println!() for debugging are disallowed while
legitimate logging via log/ logrus/ zap (Go) and log/ tracing (Rust) are
explicitly allowed; reference the symbols fmt.Println, dbg!, println!, and the
"Exceptions" clause when making the change.
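To make the distinction the rule draws concrete, here is a small Python sketch; the `checkout` logger name and `charge` function are illustrative, not taken from the plugin:

```python
import logging

logger = logging.getLogger("checkout")  # hypothetical module logger

def charge(amount_cents):
    # Banned by the rule: print(f"charging {amount_cents}"), a debug
    # statement that must be removed before committing.
    # Allowed by the exception: structured logging through a framework,
    # here Python's stdlib `logging` module.
    logger.info("charging card: %d cents", amount_cents)
    return amount_cents > 0
```

The same split applies in the other languages the rule covers: ad-hoc `fmt.Println()` or `dbg!()` is debug output, while `slog`/`zap` (Go) or `tracing` (Rust) calls are legitimate logging.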
skills/smart-commit/SKILL.md (1)

27-29: Quality gates are npm-only — consider making them language-agnostic.

The broader plugin covers Python, Go, and Rust (see rules/no-debug-statements.mdc), but the quality-gate commands are hard-coded to npm. Users on other stacks will hit "command not found" errors and likely skip the gates entirely.

Consider a comment indicating that these are example commands and should be adapted per project, or offer conditional snippets per language.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/smart-commit/SKILL.md` around lines 27 - 29, The quality-gate snippets
currently hard-code npm commands ("npm run lint", "npm run typecheck", "npm test
-- --changed --passWithNoTests"), which will fail for non-JS projects; update
SKILL.md to mark these lines as example/npm-specific commands and either (a) add
a short comment above them stating "Example commands — adapt per language
(Python/Go/Rust/etc.)" or (b) provide conditional alternative snippets for other
ecosystems (e.g., flake8/mypy/pytest for Python, golangci-lint/go test for Go,
cargo clippy/cargo test for Rust), so readers know to replace the npm commands
for their stack.
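One way to make the gates stack-aware is to dispatch on project marker files. This is a sketch only; the marker files and commands below are illustrative assumptions, not what the plugin ships:

```python
from pathlib import Path

# Illustrative marker-file -> quality-gate mapping; adapt per project.
GATES = {
    "package.json": ["npm run lint", "npm run typecheck",
                     "npm test -- --changed --passWithNoTests"],
    "pyproject.toml": ["ruff check .", "mypy .", "pytest -q"],
    "go.mod": ["golangci-lint run", "go vet ./...", "go test ./..."],
    "Cargo.toml": ["cargo clippy -- -D warnings", "cargo test"],
}

def quality_gates(project_dir):
    """Pick quality-gate commands by probing for a stack's marker file."""
    root = Path(project_dir)
    for marker, commands in GATES.items():
        if (root / marker).exists():
            return commands
    return []  # unknown stack: fall back to asking the user
```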
.cursor-plugin/plugin.json (1)

4-4: Starting version at 1.2.0 is unconventional for a new plugin.

Typically a first release starts at 0.1.0 or 1.0.0. If this is intentionally aligned with the companion Claude Code plugin's version, add a comment in the README or CHANGELOG documenting this convention so it doesn't appear accidental to marketplace consumers.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.cursor-plugin/plugin.json at line 4, The "version" key in
.cursor-plugin/plugin.json is set to "1.2.0" which is unconventional for a new
plugin; either change the value of the "version" field to a conventional initial
release (e.g., "0.1.0" or "1.0.0") in plugin.json, or if "1.2.0" is intentional
to align with the companion Claude Code plugin, add an explicit note in the
README or CHANGELOG explaining the versioning convention so consumers understand
this is deliberate (reference the "version" property in plugin.json when making
the change).
skills/learn-rule/SKILL.md (1)

23-27: Add language specifiers to fenced code blocks (MD040).

Both blocks contain plaintext output; specifying text satisfies markdownlint and prevents renderers from attempting syntax highlighting.

♻️ Proposed fix

-```
+```text
 [LEARN] Category: One-line rule
 Mistake: What went wrong
 Correction: How it was fixed

-```
+```text
 Recent mistake: Edited wrong utils.ts file
 ...

Also applies to: 44-50

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/learn-rule/SKILL.md` around lines 23 - 27, The markdown fenced code
blocks in SKILL.md are missing language specifiers which triggers MD040; update
the two plaintext fences that contain "[LEARN] Category: One-line rule /
Mistake: ... Correction: ..." and the block starting "Recent mistake: Edited
wrong utils.ts file" to use a plaintext language tag (e.g., ```text) so
markdownlint passes and renderers don't attempt syntax highlighting.

skills/replay-learnings/SKILL.md (1)

24-42: Add language specifier to the output example block (MD040).

The fenced block at line 24 is missing a language tag; use `text` for this plaintext output template.

♻️ Proposed fix

-```
+```text
 REPLAY BRIEFING: <task>
 =======================
 ...

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/replay-learnings/SKILL.md` around lines 24 - 42, The fenced code block
containing the REPLAY BRIEFING example is missing a language tag; update the
triple-backtick opener for that block (the block starting with "REPLAY BRIEFING:
<task>" in SKILL.md) to include the language specifier `text` so the fence reads
```text, ensuring the Markdown linter MD040 is satisfied.

skills/insights/SKILL.md (1)

18-18: Add text language specifiers to all four output example blocks (MD040).

Lines 18, 29, 48, and 68 each open a fenced block without a language tag. All contain plaintext output; adding text suppresses the markdownlint warnings and avoids unintended syntax highlighting.

♻️ Proposed fix (apply to all four openings)
-```
+```text
 Session Insights
 ...

Also applies to: 29-29, 48-48, 68-68

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/insights/SKILL.md` at line 18, In SKILL.md the four fenced output
example blocks (including the one that begins with "Session Insights") are plain
text but open with ``` (no language) which triggers MD040; update each opening
fence to ```text so all four plaintext example blocks use the text language
specifier and suppress markdownlint warnings.
agents/planner.md (1)

3-13: Threshold mismatch between the description field and the ## Trigger body.

Line 3 (description) surfaces >5 files as the file-count signal but omits the >10 tool calls heuristic. Line 13 (## Trigger) lists >10 tool calls expected but drops the >5 files condition. Users who see only the plugin description (e.g., in Cursor's skill picker) get an incomplete picture, while the body text gives a different set of signals.

Consider aligning them:

♻️ Proposed fix
-description: Break down complex tasks into implementation plans before writing code. Use when task touches >5 files, requires architecture decisions, or has unclear requirements.
+description: Break down complex tasks into implementation plans before writing code. Use when task touches >5 files, requires >10 tool calls, requires architecture decisions, or has unclear requirements.
-Use when multi-file changes, architecture decisions, unclear requirements, or >10 tool calls expected.
+Use when task touches >5 files, architecture decisions, unclear requirements, or >10 tool calls expected.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@agents/planner.md` around lines 3 - 13, The description field and the Planner
trigger text disagree on thresholds—'description' currently lists ">5 files"
while the "## Trigger" body lists ">10 tool calls expected" but omits the files
condition; update them to match by choosing a single heuristic set and applying
it to both places (e.g., include both ">5 files" and ">10 tool calls expected"
in the top-level description string and ensure the "## Trigger" paragraph for
Planner also lists both signals), referencing the 'description' field and the
"## Trigger" section in the Planner doc so users see the same criteria
everywhere.
agents/reviewer.md (1)

21-21: Optional: restore unicode superscript for O(n²)

O(n^2) renders as plain text in GitHub-flavoured Markdown and most Cursor renderers; the prior O(n²) renders the proper superscript everywhere without requiring any Markdown extensions.

📝 One-character fix
-5. **Performance** - O(n^2) loops, memory?
+5. **Performance** - O(n²) loops, memory?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@agents/reviewer.md` at line 21, Replace the ASCII superscript in the
performance note by changing the plain "O(n^2)" string to use the Unicode
superscript "O(n²)"; locate the occurrence of "O(n^2)" in agents/reviewer.md
(the performance bullet or heading) and update it so the rendered text shows the
proper superscript everywhere.

 name: scout
-description: Confidence-gated exploration agent that assesses readiness before implementation
+description: Confidence-gated exploration that assesses readiness before implementation. Scores 0-100 across five dimensions and gives GO/HOLD verdict.
 tools: ["Read", "Glob", "Grep", "Bash"]

⚠️ Potential issue | 🟡 Minor

Bash in tools contradicts the "read-only exploration" rule

Line 4 grants the agent Bash, which can execute arbitrary write operations (> file, sed -i, tee, etc.). Line 52 declares "Never edit files. Read-only exploration." The rule is a soft guardrail only; the tool capability remains unrestricted. If read-only is the hard intent, remove Bash from tools (shell-based reads like find/cat can be replaced by Glob/Read), or explicitly enumerate the permitted Bash operations in the rule.

📝 Option: remove Bash for strict read-only
-tools: ["Read", "Glob", "Grep", "Bash"]
+tools: ["Read", "Glob", "Grep"]

Also applies to: 52-52

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@agents/scout.md` at line 4, The tools list currently includes "Bash" which
conflicts with the agent rule "Never edit files. Read-only exploration."; update
the tools declaration (the tools array containing "Read", "Glob", "Grep",
"Bash") so it enforces read-only behavior—either remove "Bash" from the tools
array entirely and rely on "Read"/"Glob"/"Grep", or keep "Bash" but explicitly
document and enforce a whitelist of allowed read-only shell operations (e.g.,
only use non-writing commands like cat/find/grep) in the agent rules so the
intent and capability are consistent.

Comment on lines +19 to +21
3. Score confidence (0-100)
4. If >= 70: GO with findings
5. If < 70: Identify what's missing, gather more context, re-score

⚠️ Potential issue | 🟡 Minor

Workflow step 5 omits the 2-round escalation constraint defined in Rules

The Workflow at step 5 only says "Identify what's missing, gather more context, re-score," with no upper bound. The Rules section (line 54) adds the critical constraint: "If still < 70 after 2 rounds, escalate to user." An agent following the Workflow section alone has no stopping condition and could loop indefinitely. Embed the bound directly in the Workflow step to make the constraint self-contained.

📝 Proposed fix
-5. If < 70: Identify what's missing, gather more context, re-score
+5. If < 70: Identify what's missing, gather more context, re-score (max 2 rounds; escalate to user if still < 70)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@agents/scout.md` around lines 19 - 21, Workflow step 5 currently says
"Identify what's missing, gather more context, re-score" with no stopping
condition; update Workflow step 5 to include the two-round escalation constraint
from the Rules section ("If still < 70 after 2 rounds, escalate to user") so the
workflow is self-contained — explicitly track and limit re-score attempts to 2
rounds and escalate to the user if the score remains <70 after those attempts
(refer to "Workflow step 5" and the Rules text "If still < 70 after 2 rounds,
escalate to user").
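The bounded loop can be modeled in a few lines of Python; `score_confidence` and `gather_context` are stand-ins for the agent's real exploration steps, and the thresholds come from the Rules section:

```python
def scout_verdict(score_confidence, gather_context, threshold=70, max_rounds=2):
    """Confidence-gated exploration with the 2-round escalation cap."""
    confidence = score_confidence()
    for _ in range(max_rounds):
        if confidence >= threshold:
            break
        gather_context()                 # identify gaps, read more context
        confidence = score_confidence()  # re-score after the new context
    if confidence >= threshold:
        return ("GO", confidence)
    return ("ESCALATE", confidence)      # still below threshold after 2 rounds
```

Embedding the cap in the loop itself, rather than in a separate rules paragraph, is what makes the workflow self-contained.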


Always work on feature branches. Never commit directly to main.

Review changes before pushing. Run `git diff --stat` to verify what's staged.

⚠️ Potential issue | 🟡 Minor

git diff --stat shows unstaged changes, not staged ones.

The guidance says to run git diff --stat to "verify what's staged," but this command diffs the working tree against the index — it shows unstaged changes. Users following this rule would be looking at the wrong set of files before a push, which is the exact scenario the guardrail is trying to prevent.

The correct command to inspect staged content is git diff --staged --stat (alias: git diff --cached --stat).

🔧 Proposed fix
-Review changes before pushing. Run `git diff --stat` to verify what's staged.
+Review changes before pushing. Run `git diff --staged --stat` to verify what's staged.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rules/atomic-commits.mdc` at line 14, Update the guidance that currently
instructs running "git diff --stat" to verify staged changes: change the command
to "git diff --staged --stat" (or mention the alias "git diff --cached --stat")
so it inspects the index (staged content) rather than unstaged working-tree
changes; edit the sentence in rules/atomic-commits.mdc that references git diff
--stat to use the corrected command and optionally include the alias for
clarity.
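The behavior is easy to verify in a throwaway repo. This Python sketch (file names are arbitrary) stages one change, leaves another unstaged, and compares the two commands:

```python
import os
import subprocess
import tempfile

def git(*args, cwd):
    """Run a git command in the given directory and return its stdout."""
    return subprocess.run(["git", *args], cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

with tempfile.TemporaryDirectory() as repo:
    git("init", "-q", cwd=repo)
    git("config", "user.email", "dev@example.com", cwd=repo)
    git("config", "user.name", "Dev", cwd=repo)
    for name in ("indexed.txt", "worktree.txt"):
        with open(os.path.join(repo, name), "w") as f:
            f.write("original\n")
    git("add", ".", cwd=repo)
    git("commit", "-q", "-m", "init", cwd=repo)

    # Change both files, but stage only indexed.txt.
    for name in ("indexed.txt", "worktree.txt"):
        with open(os.path.join(repo, name), "a") as f:
            f.write("edited\n")
    git("add", "indexed.txt", cwd=repo)

    unstaged_stat = git("diff", "--stat", cwd=repo)            # worktree vs index
    staged_stat = git("diff", "--staged", "--stat", cwd=repo)  # index vs HEAD
    print(unstaged_stat)  # mentions worktree.txt only
    print(staged_stat)    # mentions indexed.txt only
```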

1. Identify the lesson — what mistake was made? What should happen instead?
2. Format the rule with full context.
3. Propose the addition and wait for user approval.
4. After approval, persist to LEARNED section or project memory.

⚠️ Potential issue | 🟡 Minor

Ambiguous persistence target may lead to inconsistent storage of learnings.

"Persist to LEARNED section or project memory" does not identify a canonical file. Based on learnings, the established convention is LEARNED.md. Without a concrete target, different AI sessions may store learnings to different backends, breaking the compounding learning model the skill is designed to support.

✏️ Proposed fix
-4. After approval, persist to LEARNED section or project memory.
+4. After approval, append to `LEARNED.md` (or the project's designated learning file).

Based on learnings: "When user corrects you, acknowledge the correction, propose an addition to LEARNED.md, and apply it after approval."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/learn-rule/SKILL.md` at line 19, Replace the ambiguous "Persist to
LEARNED section or project memory" with a clear canonical target: always append
approved learnings to the repository's LEARNED.md file (root-level), not to
arbitrary project memory; update the SKILL.md instruction to state the exact
flow: on user correction acknowledge the correction, propose a short addition to
LEARNED.md, and only append that entry to LEARNED.md after explicit user
approval. Ensure the wording in SKILL.md references "LEARNED.md" and the
approve-then-append behavior so different sessions use the same canonical
storage.
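The approve-then-append flow can be sketched in a few lines of Python; this is a hypothetical model of the skill's behavior, not plugin code (`propose_learning` and its parameters are invented for illustration):

```python
from pathlib import Path

def propose_learning(entry, approved, learned_file="LEARNED.md"):
    """Append the entry to LEARNED.md only after explicit user approval."""
    if not approved:
        return False  # proposal stays pending; nothing is written
    path = Path(learned_file)
    with path.open("a", encoding="utf-8") as f:
        f.write(entry.rstrip() + "\n")
    return True
```

Keeping a single canonical file means every session appends to, and replays from, the same store.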

Comment on lines +47 to +51
```
<type>(<scope>): <short summary>

<body - what changed and why>
```

⚠️ Potential issue | 🟡 Minor

Add language specifier to the fenced code block.

The commit-message template fence at Line 47 has no language specified, triggering MD040. Use text or plaintext.

🔧 Proposed fix
-```
+```text
 <type>(<scope>): <short summary>

 <body - what changed and why>
🧰 Tools
🪛 markdownlint-cli2 (0.21.0)

[warning] 47-47: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/smart-commit/SKILL.md` around lines 47 - 51, The fenced code block in
the commit-message template missing a language specifier is triggering MD040;
update the commit-message template fence in SKILL.md so the opening
triple-backtick includes a language tag (e.g., "text" or "plaintext") to mark it
as a plain text block and satisfy the linter.
