[prompt-analysis] Copilot PR Prompt Analysis - 2026-01-28 #12244
Summary
Analysis Period: Last 30 days (Dec 29, 2025 - Jan 28, 2026)
Total PRs: 1,000 | Merged: 648 (64.8%) | Closed: 342 (34.2%) | Open: 10 (1.0%)
Prompt Categories and Success Rates
Prompt Analysis
✅ Successful Prompt Patterns
Common characteristics in merged PRs:
- update, add, file_reference, resolve, create
- file_reference (mention specific files)
- code_reference (mention functions/classes)
- fix verb (bug fixes)

Example successful prompts:
View 5 Merged PR Examples
PR #12206: Fix TypeScript error in close_expired_discussions: add duplicateCount to return type → MERGED
PR #12190: Replace redundant zero-capacity slice allocations with idiomatic zero-value declarations → MERGED
PR #12189: Add package-level documentation to core packages → MERGED
PR #12176: Increase workflow health manager timeout to 30 minutes → MERGED
PR #12174: Add automated cleanup policy for stale draft PRs → MERGED
❌ Unsuccessful Prompt Patterns
Common characteristics in closed PRs:
- resolve, long_prompt, add, fix, file_reference
- code_reference (vs 15% in merged)

(A sketch of how these pattern flags might be detected follows the examples below.)

Example unsuccessful prompts:
View 5 Closed PR Examples
PR #12197: [log] Add debug logging to utility and validation functions → CLOSED
PR #12191: Verify G116 Trojan Source detection is enabled (not G424) → CLOSED
PR #12177: Fix: Require explicit assignee verification in auto-assign workflow → CLOSED
PR #12157: Track campaign label permission errors in conclusion job failure reports → CLOSED
PR #12146: Fix shell escaping for environment variable expansion in AWF → CLOSED
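The pattern flags above (file_reference, code_reference, fix verbs, long_prompt) are reported but not formally defined. Below is a minimal sketch of how such flags could be computed with simple regex heuristics; the specific patterns and thresholds are illustrative assumptions, not the workflow's actual rules.

```python
import re

def prompt_flags(prompt: str) -> dict:
    """Heuristic pattern flags for a PR prompt (illustrative assumptions only)."""
    return {
        # paths or filenames such as "docs/readme.md" or "main.go"
        "file_reference": bool(re.search(r"\b[\w./-]+\.(go|ts|js|py|md|ya?ml)\b", prompt)),
        # camelCase identifiers or call syntax such as "parseYAML" or "run()"
        "code_reference": bool(re.search(r"\b\w+\(\)|\b[a-z]+[A-Z]\w+\b", prompt)),
        # leading imperative verb, as seen in many merged prompts
        "fix_verb": bool(re.match(r"\s*(fix|update|add|create|resolve)\b", prompt, re.I)),
        # the >100-word threshold is an assumption
        "long_prompt": len(prompt.split()) > 100,
    }

# Hypothetical prompt, not one of the PRs listed above:
print(prompt_flags("Update config/loader.go: rename parseYAML to ParseYAML"))
# {'file_reference': True, 'code_reference': True, 'fix_verb': True, 'long_prompt': False}
```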
Key Insights
Based on 1,000 PRs analyzed over 30 days, here are the most actionable findings:
🎯 Refactoring > Bug Fixing: Refactoring tasks have a 90% success rate vs 62.8% for bug fixes. The highest success comes from well-defined code improvements, not firefighting.
📏 Length Sweet Spot: Both short (<50 words) and long (>100 words) prompts work - but extremely short prompts (<20 words) and extremely long prompts (>200 words) correlate with closures. The issue isn't length per se, but clarity.
📁 File References Matter Less Than Expected: While 38% of merged PRs mention files, 37% of closed PRs also do. Simply mentioning files doesn't guarantee success - context matters more.
🐛 Bug Fixes Are Hard: Bug fixes have the lowest success rate (62.8%) despite being the most common task type (709 PRs). This suggests bug fixes need better scoping or are inherently riskier.
🔍 "Resolve" Is a Red Flag: The word "resolve" appears in 58% of closed PRs vs 37% of merged ones. Vague problem-solving tasks without clear solutions tend to fail.
Recommendations
Based on the data analysis, follow these best practices for higher PR success rates:
✅ DO:
Choose refactoring over reactive fixes (90% success)
Be specific about the desired outcome
Include concrete acceptance criteria
Focus on one clear change
❌ AVOID:
Vague "resolve" or "investigate" prompts
Extremely short prompts without context
Bug fixes without reproduction steps
Rambling context dumps
Historical Trends
Tracking success rates over the last 7 reports:
Trend: Success rates remain stable at 63-65% over the past week. Refactoring consistently shows highest success rates (80-100%), while bug fixes dominate by volume but have lower success rates (62-65%).
Methodology Notes
Prompt Extraction: Prompts extracted from PR bodies using the "Original prompt" section marker. For PRs without this marker, the first 500 characters of the body are used.
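As a rough illustration of that extraction step, here is a minimal sketch; the exact marker format and fallback handling are assumptions based on the description above, not the workflow's actual code.

```python
import re

def extract_prompt(pr_body: str, fallback_chars: int = 500) -> str:
    """Return the text under an "Original prompt" heading if present,
    otherwise the first 500 characters of the PR body (assumed behaviour)."""
    match = re.search(
        r"Original prompt\s*\n+(.*?)(?=\n#{1,6}\s|\Z)",  # capture until the next heading or end of body
        pr_body,
        flags=re.IGNORECASE | re.DOTALL,
    )
    if match:
        return match.group(1).strip()
    return pr_body[:fallback_chars].strip()
```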
Categorization: Automated keyword-based categorization:
- bug_fix: Contains "fix", "resolve", "correct", "bug", "issue", "error"
- feature: Contains "add", "implement", "create", "new feature"
- refactor: Contains "refactor", "improve", "optimize", "restructure"
- documentation: Contains "document", "docs", "readme", "comment"
- test: Contains "test", "coverage", "spec"
- other: Does not match above patterns

Success Rate: Calculated as merged / (merged + closed) for completed PRs. Open PRs (10) are excluded from success rate calculations.

Analysis generated from 1,000 Copilot PRs (Dec 29, 2025 - Jan 28, 2026)
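The categorization rules and success-rate formula translate directly into a small sketch. The order in which categories are checked (and therefore how prompts matching several keyword sets are resolved) is an assumption, since the report does not specify tie-breaking.

```python
CATEGORY_KEYWORDS = [
    ("bug_fix", ["fix", "resolve", "correct", "bug", "issue", "error"]),
    ("feature", ["add", "implement", "create", "new feature"]),
    ("refactor", ["refactor", "improve", "optimize", "restructure"]),
    ("documentation", ["document", "docs", "readme", "comment"]),
    ("test", ["test", "coverage", "spec"]),
]

def categorize(prompt: str) -> str:
    """Keyword-based task category; first matching rule set wins (assumed ordering)."""
    p = prompt.lower()
    for category, keywords in CATEGORY_KEYWORDS:
        if any(k in p for k in keywords):
            return category
    return "other"

def success_rate(merged: int, closed: int) -> float:
    """merged / (merged + closed); open PRs are excluded before this is called."""
    return merged / (merged + closed)

# Using the totals from the Summary above: success_rate(648, 342) ≈ 0.65
```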
Workflow Run: #21437296251