
Commit 3c93757

Nick Sullivan and claude committed
📚 Add executing model superiority principle to prompt engineering
Key additions:

- "Assume the executing model is smarter" as first principle
- Acknowledge that prompts written by GPT-4 may be executed by Claude 3.5 or newer models
- Emphasize not preventing superior models from using their capabilities
- Reinforce trust in executing model's abilities over prescriptive details

This principle fundamentally changes how we write prompts - focusing on goals rather than implementation, knowing the executor is likely more capable than the creator.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
1 parent e6d1bcd commit 3c93757

File tree

1 file changed: +4 -0 lines changed


.cursor/rules/prompt-engineering.mdc

Lines changed: 4 additions & 0 deletions
```diff
@@ -29,6 +29,7 @@ you create for LLM consumption.
 
 ## Key Principles for LLM-Readable Prompts
 
+- Assume the executing model is smarter: The model executing your prompt is likely more capable than the model that created it. Trust its abilities rather than over-prescribing implementation details.
 - Front-load critical information: LLMs give more weight to early content
 - Be explicit: LLMs can't infer context the way humans do
 - Maintain consistency: Use the same terminology throughout
```
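
The principles in this hunk lend themselves to a concrete illustration. Below is a minimal, hypothetical Python sketch, not part of the commit, showing a prompt assembled so the goal comes first, context is explicit, terminology stays consistent, and the "how" is left to the executing model; the `build_prompt` helper and its example arguments are invented for illustration.

```python
# Hypothetical sketch (not from the commit): a prompt builder that
# front-loads the goal, makes context explicit, and keeps terminology
# consistent, while leaving implementation to the executing model.

def build_prompt(goal: str, context: str, constraints: list[str]) -> str:
    """Assemble a prompt: goal first, then context, then constraints."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Goal: {goal}\n\n"        # critical information first
        f"Context: {context}\n\n"  # explicit, since the model can't infer it
        f"Constraints:\n{constraint_lines}\n"
        # Deliberately no step-by-step procedure: the executing model
        # is trusted to choose its own implementation.
    )

print(build_prompt(
    goal="Summarize the attached changelog as user-facing release notes",
    context="The audience is end users, not developers",
    constraints=["Under 200 words", "Use the term 'release' throughout"],
))
```

The point is the ordering and the absence of prescribed steps, not the helper itself.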
```diff
@@ -142,11 +143,14 @@ what you don't want, even as a counterexample.
 When writing prompts for LLM execution (commands, workflows, agents), focus on clear
 outcomes rather than micro-managing steps. LLMs can figure out implementation details.
 
+Remember: The model executing your prompt is likely more advanced than the model that created it. A prompt written by GPT-4 might be executed by Claude 3.5 Sonnet or GPT-4o. Even prompts written by older versions of the same model will be executed by newer, smarter versions. Trust the executing model's superior capabilities.
+
 ### The Over-Prescription Problem in LLM-to-LLM Communication
 
 Overly prescriptive prompts create problems when LLMs execute them:
 
 - Waste tokens on process details the executing LLM can determine
+- Prevent the executing model from using its superior capabilities
 - Create brittle workflows that break with slight context changes
 - Force unnecessary decision trees that add complexity
 - Reduce the LLM's ability to handle edge cases intelligently
```
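
To make the over-prescription problem concrete, here is a hypothetical contrast, again not from the commit, between a micro-managed prompt and an outcome-focused one; both strings are invented for illustration.

```python
# Hypothetical contrast (not from the commit) between an over-prescribed
# prompt and an outcome-focused one.

# Over-prescribed: spends tokens on steps the executing model could
# determine itself, and breaks if the input format shifts slightly.
over_prescribed = """\
Step 1: Split the document on blank lines.
Step 2: Take the first sentence of each chunk.
Step 3: Rank the sentences by length.
Step 4: Output the top 3 as bullets prefixed with '*'.
"""

# Outcome-focused: states the goal and constraints, and trusts the
# (likely more capable) executing model with the implementation.
outcome_focused = """\
Summarize the document in 3 bullets, ordered by importance.
Keep each bullet under 20 words.
"""

print(len(over_prescribed), ">", len(outcome_focused))  # fewer tokens, same goal
```

The outcome-focused version spends fewer tokens, survives input-format changes, and leaves edge cases to the likely stronger executing model, which is exactly the failure mode the bullet list in the diff describes.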

0 commit comments
