
claude-reflect


A self-learning system for Claude Code that captures corrections, positive feedback, and preferences — then syncs them to CLAUDE.md and AGENTS.md.

What it does

When you correct Claude Code during a session ("no, use gpt-5.1 not gpt-5", "use database for caching"), these corrections are captured and can be added to your CLAUDE.md files so Claude remembers them in future sessions.

┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│  You correct    │ ──► │  Hook captures  │ ──► │  /reflect adds  │
│  Claude Code    │     │  to queue       │     │  to CLAUDE.md   │
└─────────────────┘     └─────────────────┘     └─────────────────┘
      (automatic)            (automatic)            (manual review)

Installation

# Add the marketplace
claude plugin marketplace add bayramannakov/claude-reflect

# Install the plugin
claude plugin install claude-reflect@claude-reflect-marketplace

# IMPORTANT: Restart Claude Code to activate the plugin

After installation, restart Claude Code (exit and reopen). The hooks then configure themselves automatically and the commands become available.

First run? When you run /reflect for the first time, you'll be prompted to scan your past sessions for learnings.

Prerequisites

  • Claude Code CLI installed
  • Python 3.6+ (included on most systems)

Platform Support

  • macOS: Fully supported
  • Linux: Fully supported
  • Windows: Fully supported (native Python, no WSL required)

Commands

| Command | Description |
| --- | --- |
| `/reflect` | Process queued learnings with human review |
| `/reflect --scan-history` | Scan ALL past sessions for missed learnings |
| `/reflect --dry-run` | Preview changes without applying them |
| `/reflect --targets` | Show detected config files (CLAUDE.md, AGENTS.md) |
| `/reflect --review` | Show the queue with confidence scores and decay status |
| `/reflect --dedupe` | Find and consolidate similar entries in CLAUDE.md |
| `/skip-reflect` | Discard all queued learnings |
| `/view-queue` | View pending learnings without processing them |

How It Works

claude-reflect in action

Two-Stage Process

Stage 1: Capture (Automatic)

Hooks run automatically to detect and queue corrections:

| Hook | Trigger | Purpose |
| --- | --- | --- |
| `capture_learning.py` | Every prompt | Detects correction patterns and queues them |
| `check_learnings.py` | Before compaction | Backs up the queue and informs the user |
| `post_commit_reminder.py` | After `git commit` | Reminds you to run `/reflect` after completing work |
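
Claude Code hooks receive the event payload as JSON on stdin. A minimal sketch of a capture-style hook (illustrative only — the queue path and pattern set here are assumptions, not the plugin's actual `capture_learning.py`):

```python
import json
import re
import sys
from pathlib import Path

# Hypothetical queue location; the real plugin manages its own state.
QUEUE = Path.home() / ".claude" / "reflect-queue.jsonl"

# Illustrative correction patterns (see "Detection Methods" below).
CORRECTION_RE = re.compile(r"^no,|don't use|that's wrong|^remember:", re.IGNORECASE)

def looks_like_correction(prompt: str) -> bool:
    """Cheap regex gate run on every prompt."""
    return bool(CORRECTION_RE.search(prompt))

def main() -> None:
    # The hook entry point: read the event payload from stdin,
    # append matching prompts to the queue file.
    payload = json.load(sys.stdin)
    prompt = payload.get("prompt", "")
    if looks_like_correction(prompt):
        QUEUE.parent.mkdir(parents=True, exist_ok=True)
        with QUEUE.open("a") as f:
            f.write(json.dumps({"text": prompt}) + "\n")
```

Because the hook only appends to a queue, detection stays fast and the expensive semantic validation is deferred to `/reflect`.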

Stage 2: Process (Manual)

Run /reflect to review and apply queued learnings to CLAUDE.md.

Detection Methods

Claude-reflect uses a hybrid detection approach:

1. Regex patterns (real-time capture)

Fast pattern matching during sessions detects:

  • Corrections: "no, use X" / "don't use Y" / "actually..." / "that's wrong"
  • Positive feedback: "Perfect!" / "Exactly right" / "Great approach"
  • Explicit markers: "remember:" — highest confidence
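The tiers above could be sketched roughly as follows (patterns and confidence values are illustrative; the plugin's real pattern set differs):

```python
import re

# Illustrative pattern tiers: (pattern, kind, confidence).
PATTERNS = [
    (re.compile(r"^remember:", re.IGNORECASE), "explicit", 0.95),
    (re.compile(r"^no,|don't use|that's wrong|^actually\b", re.IGNORECASE), "correction", 0.75),
    (re.compile(r"\bperfect\b|exactly right|great approach", re.IGNORECASE), "positive", 0.60),
]

def classify(prompt: str):
    """Return (kind, confidence) for the first matching tier, or None."""
    for pattern, kind, confidence in PATTERNS:
        if pattern.search(prompt):
            return kind, confidence
    return None
```

Explicit `remember:` markers are checked first, so they always win the highest confidence.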

2. Semantic AI validation (during /reflect)

When you run /reflect, an AI-powered semantic filter validates each queued item, providing:

  • Multi-language support — understands corrections in any language
  • Better accuracy — filters out false positives from regex
  • Cleaner learnings — extracts concise, actionable statements

Example: A Spanish correction like "no, usa Python" is correctly detected even though it doesn't match English patterns.

Each captured learning has a confidence score (0.60-0.95). The final score is the higher of the regex and semantic confidences.
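
The combination rule amounts to a few lines (a sketch of the documented behavior, not the plugin's code):

```python
from typing import Optional

def final_confidence(regex_score: Optional[float], semantic_score: Optional[float]) -> float:
    """The final score is the higher of the regex and semantic
    confidences; a learning seen by only one detector keeps that score."""
    scores = [s for s in (regex_score, semantic_score) if s is not None]
    return max(scores)
```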

Human Review

When you run /reflect, Claude presents a summary table with options:

  • Apply - Accept the learning and add to CLAUDE.md
  • Edit before applying - Modify the learning text first
  • Skip - Don't apply this learning

Multi-Target Sync

Approved learnings are synced to:

  • ~/.claude/CLAUDE.md (global - applies to all projects)
  • ./CLAUDE.md (project-specific)
  • AGENTS.md (if exists - works with Codex, Cursor, Aider, Jules, Zed, Factory)

Run /reflect --targets to see which files will be updated.
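
Target discovery might look like this (a guess at the behavior behind `/reflect --targets`; the real logic may differ, for instance in when a project-level CLAUDE.md gets created):

```python
from pathlib import Path

def sync_targets(project_dir: Path) -> list:
    """Files a learning could be synced to: the global CLAUDE.md always,
    project-level files only when they already exist."""
    targets = [Path.home() / ".claude" / "CLAUDE.md"]
    for name in ("CLAUDE.md", "AGENTS.md"):
        candidate = project_dir / name
        if candidate.exists():
            targets.append(candidate)
    return targets
```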

Uninstall

claude plugin uninstall claude-reflect@claude-reflect-marketplace

File Structure

claude-reflect/
├── .claude-plugin/
│   └── plugin.json         # Plugin manifest (auto-registers hooks)
├── commands/
│   ├── reflect.md          # Main command
│   ├── skip-reflect.md     # Discard queue
│   └── view-queue.md       # View queue
├── hooks/
│   └── hooks.json          # Auto-configured when plugin installed
├── scripts/
│   ├── lib/
│   │   ├── reflect_utils.py      # Shared utilities
│   │   └── semantic_detector.py  # AI-powered semantic analysis
│   ├── capture_learning.py       # Hook: detect corrections
│   ├── check_learnings.py        # Hook: pre-compact check
│   ├── post_commit_reminder.py   # Hook: post-commit reminder
│   ├── compare_detection.py      # Compare regex vs semantic detection
│   ├── extract_session_learnings.py
│   ├── extract_tool_rejections.py
│   └── legacy/                   # Bash scripts (deprecated)
├── tests/                  # Test suite
└── SKILL.md                # Skill context for Claude

Features

Historical Scan

First time using claude-reflect? Run:

/reflect --scan-history

This scans all your past sessions for corrections you made, so you don't lose learnings from before installation.
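
Conceptually, the scan replays past transcripts through the same detectors used for live capture. A minimal sketch, assuming sessions are stored as JSONL files under `~/.claude/projects/` (the actual location and event schema may vary by Claude Code version):

```python
import json
from pathlib import Path

# Assumed transcript location; adjust if your Claude Code version
# stores session logs elsewhere.
SESSIONS_DIR = Path.home() / ".claude" / "projects"

def scan_sessions(sessions_dir: Path = SESSIONS_DIR):
    """Yield user messages from past session transcripts (JSONL,
    one event per line) so they can be re-run through the detectors."""
    for transcript in sessions_dir.rglob("*.jsonl"):
        for line in transcript.read_text().splitlines():
            try:
                event = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip malformed lines rather than abort the scan
            if event.get("type") == "user":
                yield event
```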

Smart Filtering

Claude filters out:

  • Questions (not corrections)
  • One-time task instructions
  • Context-specific requests
  • Vague/non-actionable feedback

Only reusable learnings are kept.
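
The plugin's actual filtering is AI-driven; a heuristic stand-in for the categories above might look like:

```python
def is_reusable(text: str) -> bool:
    """Illustrative heuristics only — the real filter is semantic."""
    t = text.strip().lower()
    if t.endswith("?"):                                 # a question, not a correction
        return False
    if t.startswith(("please ", "can you ", "now ")):   # one-time task instruction
        return False
    if len(t.split()) < 3:                              # too vague to act on
        return False
    return True
```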

Duplicate Detection

Before adding a learning, existing CLAUDE.md content is checked. If similar content exists, you can:

  • Merge with existing entry
  • Replace the old entry
  • Skip the duplicate
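
A simple way to picture the check (the plugin's comparison is semantic; plain string similarity via `difflib` stands in for it here, and the 0.8 threshold is arbitrary):

```python
from difflib import SequenceMatcher
from typing import Optional

def find_near_duplicate(new_entry: str, existing: list, threshold: float = 0.8) -> Optional[str]:
    """Return the first existing CLAUDE.md entry that closely matches
    the new learning, or None if nothing similar exists."""
    for entry in existing:
        ratio = SequenceMatcher(None, new_entry.lower(), entry.lower()).ratio()
        if ratio >= threshold:
            return entry
    return None
```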

Semantic Deduplication

Over time, CLAUDE.md can accumulate similar entries. Run /reflect --dedupe to:

  • Find semantically similar entries (even with different wording)
  • Propose consolidated versions
  • Clean up redundant learnings

Example:

Before:
  - Use gpt-5.1 for complex tasks
  - Prefer gpt-5.1 for reasoning
  - gpt-5.1 is better for hard problems

After:
  - Use gpt-5.1 for complex reasoning tasks

Tips

  1. Use explicit markers for important learnings:

    remember: always use venv for Python projects
    
  2. Run /reflect after git commits - The hook reminds you, but make it a habit

  3. Historical scan on new machines - When setting up a new dev environment:

    /reflect --scan-history --days 90
    
  4. Project vs Global - Model names and general patterns go global; project-specific conventions stay in project CLAUDE.md

Contributing

Pull requests welcome! Please read the contributing guidelines first.

License

MIT
