Context is what makes agents effective. Added guidance for:
- Agent launching: Provide task, implementation, and project
context, plus a specific focus area. Tailor to agent type
(debuggers need error details, reviewers need change rationale,
implementers need constraints).
- Phase continuity: Maintain context throughout workflow. Carry
forward user clarifications, implementation decisions, and
constraint discoveries. Don't re-decide or re-ask.
- Bot feedback evaluation: You have context bots lack (project
standards, implementation rationale, trade-offs). Evaluate
feedback against this context before accepting.
- PR description: Provide reviewers with decision context (why
this approach, trade-offs made, how it fits the system).
- Error recovery: Capture decision-enabling context (what was
attempted, state before failure, root cause indicators).
Without context, agents guess. With it, they make informed
decisions aligned with project goals.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Create your execution plan, then implement the solution. Use /load-cursor-rules to load relevant project standards for the task. Execute agents in parallel when possible, sequentially when they depend on each other.
+
+When launching agents, provide targeted context for effectiveness: task context (original requirements and any clarifications), implementation context (what's been built, decisions made, constraints), project context (relevant standards from /load-cursor-rules), and specific focus area. Tailor context to agent type - debuggers need error details and reproduction steps, reviewers need change rationale and risk areas, implementers need full requirements and constraints.
+
+Maintain context throughout workflow phases. Decisions and clarifications from earlier phases inform later ones - don't re-decide or re-ask. Carry forward user clarifications, implementation decisions, constraint discoveries, and why choices were made.
</autonomous-execution>
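As a hypothetical illustration of the targeted launch context the added lines describe (the `AgentContext` shape and `build_launch_context` helper are assumed names for this sketch, not an API the workflow file defines):

```python
# A minimal sketch of per-agent-type launch context. The field names and
# the helper below are illustrative assumptions only.
from dataclasses import dataclass, field


@dataclass
class AgentContext:
    task: str                      # original requirements plus clarifications
    implementation: str            # what's been built, decisions, constraints
    project_standards: list[str]   # relevant rules from /load-cursor-rules
    focus: str                     # the specific area this agent should target
    extras: dict[str, str] = field(default_factory=dict)


def build_launch_context(agent_type: str, base: AgentContext) -> AgentContext:
    """Tailor the shared context to the type of agent being launched."""
    if agent_type == "debugger":
        base.extras = {"error_details": "traceback", "repro_steps": "steps"}
    elif agent_type == "reviewer":
        base.extras = {"change_rationale": "why", "risk_areas": "where"}
    elif agent_type == "implementer":
        base.extras = {"full_requirements": "spec", "constraints": "limits"}
    return base
```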
<validation-and-review>
Ensure code quality through adaptive validation that scales with complexity and risk. Match review intensity to the changes: simple changes need only automated checks, medium complexity benefits from targeted agent review, high-risk or security-sensitive changes warrant comprehensive review. Use your judgment to determine what level of validation the changes require.
</validation-and-review>
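One way the adaptive-validation rule might look in code, as a sketch only (the line-count thresholds and level names are assumptions, not values this workflow specifies):

```python
# A sketch of review intensity scaling with complexity and risk.
def review_level(lines_changed: int, security_sensitive: bool) -> str:
    if security_sensitive:
        return "comprehensive"     # high-risk changes always get full review
    if lines_changed < 50:
        return "automated-checks"  # simple change: CI and linters suffice
    if lines_changed < 500:
        return "targeted-agent"    # medium complexity: focused agent review
    return "comprehensive"
```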
<create-pr>
-Deliver a well-documented pull request ready for review, with commits following @rules/git-commit-message.mdc and a clear description of changes, impact, and testing approach.
+Deliver a well-documented pull request ready for review, with commits following .cursor/rules/git-commit-message.mdc. Provide reviewers with decision context: why this approach over alternatives, what trade-offs were made, how this fits the larger system, and what testing validates the changes.
</create-pr>
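A sketch of how that reviewer-facing decision context could be laid out in a PR body; the section headings and example values here are hypothetical, not a template the workflow prescribes:

```python
# Illustrative PR body structure covering the four decision-context points.
PR_BODY_TEMPLATE = """\
## Summary
{summary}

## Why this approach
{rationale}

## Trade-offs
{tradeoffs}

## How it fits the system
{system_fit}

## Testing
{testing}
"""

body = PR_BODY_TEMPLATE.format(
    summary="Add retry with backoff to the sync job.",
    rationale="Smaller blast radius than the queue-based redesign considered.",
    tradeoffs="Retries add latency on failure; bounded at three attempts.",
    system_fit="Reuses the existing backoff helper; no new dependencies.",
    testing="Unit tests cover retry limits; manual run against staging.",
)
```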
<bot-feedback-loop>
-Autonomously address valuable bot feedback, reject what's not applicable, and deliver a PR ready for human review with all critical issues resolved. Give bots time to analyze, then review their feedback critically. Fix what's valuable (security issues, real bugs, good suggestions). Reject what's not (use WONTFIX with brief explanation for context-missing or incorrect feedback). You are the ultimate decider - trust your judgment on what matters. Iterate as needed until critical issues are resolved.
+Autonomously address valuable bot feedback, reject what's not applicable, and deliver a PR ready for human review with all critical issues resolved. Give bots time to analyze, then review their feedback critically. You have context bots lack: project standards, why implementation choices were made, trade-offs considered, and user requirements. Evaluate feedback against this context - bots may suggest changes that contradict project patterns or misunderstand requirements. Fix what's valuable (security issues, real bugs, good suggestions). Reject what's not (use WONTFIX with brief explanation for context-missing or incorrect feedback). You are the ultimate decider - trust your judgment on what matters. Iterate as needed until critical issues are resolved.
</bot-feedback-loop>
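A minimal sketch of the triage the new paragraph describes; the `comment` dict shape and `rationale` map are assumptions for illustration, not the format any particular review bot emits:

```python
# Evaluate one piece of bot feedback against context the bot lacks.
def triage(comment: dict, rationale: dict[str, str]) -> str:
    """Return 'fix' or a WONTFIX reply for one bot comment."""
    if comment["category"] in ("security", "bug"):
        return "fix"  # real issues are always worth fixing
    if comment["suggestion"] in rationale:
        # We hold the implementation rationale the bot cannot see.
        return f"WONTFIX: {rationale[comment['suggestion']]}"
    return "fix" if comment.get("actionable", False) else "WONTFIX: not applicable"


print(triage(
    {"category": "style", "suggestion": "replace raw SQL with the ORM"},
    {"replace raw SQL with the ORM": "raw SQL is the project standard on this hot path"},
))
```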
<completion>
Provide a summary of what was accomplished, highlights you're proud of, and any significant issues found and fixed during bot review. Scale the summary length to the complexity of the change - simple fixes get a sentence or two, major features deserve a paragraph. Include the PR URL and worktree location.
</completion>
<error-handling>
-Recover gracefully from failures when possible, or inform the user clearly when manual intervention is needed. Capture error context and assess whether automatic recovery is feasible. Attempt fixes when you can (like auto-fixing validation errors). For issues you can't resolve autonomously, inform the user with clear options and context.
+Recover gracefully from failures when possible, or inform the user clearly when manual intervention is needed. Capture decision-enabling context: what was being attempted, what state preceded the failure, what the error indicates about root cause, and whether you have enough information to fix it autonomously. Attempt fixes when you can (like auto-fixing validation errors). For issues you can't resolve autonomously, inform the user with clear options and context.
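A hypothetical capture of that decision-enabling context might look like this (the field names are illustrative assumptions):

```python
# Shape for the error context the paragraph above enumerates.
from dataclasses import dataclass


@dataclass
class ErrorContext:
    attempted: str        # what was being attempted
    prior_state: str      # state immediately before the failure
    root_cause_hint: str  # what the error indicates about root cause
    can_autofix: bool     # enough information to fix autonomously?


def handle(err: Exception, ctx: ErrorContext) -> str:
    if ctx.can_autofix:
        return "attempting automatic recovery"
    # Surface options with context rather than a bare stack trace.
    return (f"Manual intervention needed while {ctx.attempted}: {err}. "
            f"Likely cause: {ctx.root_cause_hint}. "
            f"State before failure: {ctx.prior_state}.")
```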