An Agent Skill for closing out coding sessions with knowledge transfer. Compatible with Claude Code, GitHub Copilot, Cursor, Windsurf, and other agents.
Reviews what changed during a session, scans the conversation for correction moments, updates project documentation and agent instructions, persists lessons learned, and surfaces useful follow-up tasks or automation ideas — so future sessions start with better context and fewer repeated mistakes. It is also a good fit before handoff or whenever the user wants to preserve decisions, gotchas, recurring mistakes, or next-step context from the work.
Works with any agent instruction convention: AGENTS.md, CLAUDE.md, Copilot custom instructions (e.g., .github/copilot-instructions.md, .github/agents/*.md), .cursorrules, or whatever your project uses.
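As an illustrative sketch only (the file list mirrors the conventions named above; the loop itself is not part of the skill), detecting which conventions a project uses comes down to a simple existence check:

```shell
# Create a demo project so the check below has something to find (hypothetical layout).
mkdir -p demo-project/.github && cd demo-project
touch CLAUDE.md .github/copilot-instructions.md

# Check each known agent-instruction convention and report the ones present.
for f in AGENTS.md CLAUDE.md .cursorrules .github/copilot-instructions.md; do
  [ -f "$f" ] && echo "found: $f"
done
```

In this demo layout the loop reports `CLAUDE.md` and `.github/copilot-instructions.md`; a real wrap-up pass would run the same kind of check against the project root.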
Install:
`npx skills add xpepper/session-wrap-up`

Trigger phrases:
- "let's wrap up"
- "end of session"
- "help me hand this off"
- "save what we learned"
- "capture lessons"
- "document the gotchas from today"
- "update the docs before we close"
What it does:
- Gathers context from recent git history and uncommitted changes
- Scans the conversation for moments where the user had to correct or redirect the agent — these are the highest-value lessons
- Identifies agent instruction files in the project and reads them to understand current content and style
- Scales the wrap-up depth to the size of the session instead of forcing a heavy process every time
- Optionally fans out larger wrap-ups into separate doc, lesson, follow-up, and automation analysis tracks
- Validates proposed captures against existing docs before adding new instructions
- Asks the user to confirm proposed captures and to surface any recurring issues from past sessions
- Updates project docs and agent instructions with accurate, actionable entries matching the project's existing style
- Saves personal/workflow lessons to `docs/agent/memories/lessons-learned.md` if they don't fit in agent instruction files
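The git-based context gathering above can be sketched with plain commands. A throwaway repo is set up here so the sketch runs standalone; the skill itself simply reads your project's real history:

```shell
# Set up a throwaway repo so the commands below have something to read.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email wrapup@example.com && git config user.name wrapup
echo "notes" > notes.md && git add notes.md && git commit -qm "session work"
echo "wip" >> notes.md            # leave an uncommitted change behind

# The context-gathering pass boils down to commands like these:
git log --oneline -n 20           # recent commits from this session
git status --short                # uncommitted changes
git diff --stat                   # summary of what is still unstaged
```

Together these three views (committed work, dirty files, diff summary) give the wrap-up its raw material before it scans the conversation itself.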
Requirements:
- Git
For routine skill changes, keep the validation pass small:
- Run the Anthropic `skill-creator` `quick_validate.py` check.
- Run the minimal behavioral eval set in `evals/evals.json`.
The default eval set protects three behaviors:
- meaningful wrap-ups still update the right docs/instructions
- already-documented guidance is deduplicated instead of being re-added
- trivial sessions stay no-op
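The authoritative schema lives in `evals/evals.json`; as a rough illustration only (the field names below are hypothetical, not the file's actual schema), each entry pairs a simulated session with the behavior it protects:

```json
[
  {
    "name": "meaningful-session-updates-docs",
    "session": "work with real corrections and decisions",
    "expect": "relevant docs and agent instructions are updated"
  },
  {
    "name": "existing-guidance-deduplicated",
    "session": "lesson already covered by current instructions",
    "expect": "no duplicate entry is added"
  },
  {
    "name": "trivial-session-noop",
    "session": "short session with no corrections or changes",
    "expect": "wrap-up makes no edits"
  }
]
```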
For larger workflow changes, run the deeper Anthropic skill-creator comparison flow as well:
compare the current skill against the previous committed version and inspect the generated review
artifact. That deeper benchmark is useful for behavior changes, but it is intentionally not the
default gate for every small text edit.
License: MIT