Executive Summary
As a Claude Code user who does not use GitHub Copilot or Copilot CLI, I completed a comprehensive review of the gh-aw documentation to assess whether Claude users can successfully adopt this tool.
Key Finding: Yes, Claude Code users can successfully adopt gh-aw, though with some friction during onboarding. The documentation is fundamentally multi-engine friendly with excellent authentication documentation and engine-agnostic tools. However, the Quick Start guide and example distribution skew toward Copilot, which may create initial confusion for non-Copilot users.
Overall Assessment Score: 7.5/10
Persona Context
I reviewed this documentation as a developer who:
- ✅ Uses GitHub for version control
- ✅ Uses Claude Code as primary AI assistant
- ❌ Does NOT use GitHub Copilot
- ❌ Does NOT use Copilot CLI
- ❌ Does NOT have Copilot subscription
Question 1: Onboarding Experience
Can a Claude Code user understand and get started with gh-aw?
Answer: Yes, but with initial confusion that gets resolved quickly.
The onboarding experience starts with a Copilot-first orientation but provides clear paths for Claude users. The gh aw add-wizard command specifically prompts for engine selection, which is excellent for discoverability.
Positive Findings:
- Prerequisites section (docs/src/content/docs/setup/quick-start.mdx:20-26) explicitly lists all three engines as equals: "GitHub Copilot, Anthropic Claude or OpenAI Codex"
- The add-wizard interactive process includes "Select an AI Engine - Choose between Copilot, Claude, or Codex"
- Authentication setup is clearly documented for each engine in separate sections
- Creating Workflows guide (docs/src/content/docs/setup/creating-workflows.mdx) is completely engine-agnostic
Issues Found:
- Quick Start Title Focus - The guide is titled "Adding an Automated Daily Status Workflow" but doesn't immediately clarify that the engine choice is flexible
- Step 2 Language - Step 2 says "Select an AI Engine," but only after you've already committed to running the wizard
- Example Workflow Default - The daily-repo-status workflow example doesn't show what engine it uses by default
Specific Issues Found:
- Issue 1: docs/src/content/docs/setup/quick-start.mdx:35 - Prerequisites list mentions Copilot first, which unconsciously signals it as the "default" choice
- Issue 2: docs/src/content/docs/setup/quick-start.mdx:51-56 - The wizard steps are described, but engine selection is item #2, meaning users might assume Copilot is required before reaching that step
- Issue 3: README.md:32 - The Quick Start link goes directly to the guide without mentioning multi-engine support upfront
Recommended Fixes:
- Add a prominent callout at the start of Quick Start: "💡 Works with your preferred AI tool: Choose GitHub Copilot, Claude by Anthropic, or OpenAI Codex during setup"
- Reorder or emphasize engine selection earlier in the wizard description
- Add a "Which engine should I choose?" comparison table in Quick Start or link to it
- Include engine badges (Copilot | Claude | Codex) in workflow examples showing which engine they use
Question 2: Inaccessible Features for Non-Copilot Users
What features or steps don't work without Copilot?
Answer: All core features work without Copilot. No true blockers identified.
Features That Require Copilot:
- Custom Agents (docs/src/content/docs/reference/engines.md:36-49) - The agent: field with .github/agents/*.agent.md files is Copilot-specific
  - Impact: Limited - this is an advanced customization feature, not core functionality
  - Alternative: Claude users can customize prompts in workflow markdown directly
- Web Search Tool (docs/src/content/docs/reference/tools.md:39-41) - Documentation notes "Some engines require third-party MCP servers for web search"
  - Impact: Moderate - web-search may require additional setup for non-Copilot engines
  - Alternative: Use the web-fetch tool or configure MCP servers
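The web-fetch fallback mentioned above can be expressed as a tools block in workflow frontmatter. This is a hedged sketch: the tool names (web-fetch:, mcp-servers:) appear in this report's tool inventory, but the nested configuration keys and the server name are illustrative assumptions, not confirmed gh-aw schema.

```yaml
---
engine: claude
tools:
  # web-fetch is engine-agnostic per the tools analysis in this report
  web-fetch:
  # Hypothetical MCP server entry for web search; the command/args keys
  # and the package name are assumptions to verify against the gh-aw docs.
  mcp-servers:
    web-search:
      command: "npx"
      args: ["-y", "example-web-search-mcp-server"]
---
```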
Features That Work Without Copilot (Engine-Agnostic):
- ✅ All workflow triggers (schedule, issues, PRs, workflow_dispatch)
- ✅ All MCP tools (agentic-workflows, github, playwright, cache-memory, repo-memory)
- ✅ All safe outputs (create-issue, create-discussion, create-pull-request, etc.)
- ✅ Bash tool with configurable allowlist
- ✅ Edit tool for file operations
- ✅ Network permissions and firewall controls
- ✅ Imports and workflow composition
- ✅ Safe inputs for custom inline tools
- ✅ All security features (sandboxing, lockdown mode, safe outputs)
Missing Documentation:
- No explicit statement that "All features work with all engines unless specifically noted"
- Web search tool capabilities by engine not clearly documented
- Custom agent equivalent for Claude not documented (is it needed? can markdown suffice?)
Question 3: Documentation Gaps and Assumptions
Where does the documentation assume Copilot usage?
Answer: Minimal assumptions found. Documentation is impressively multi-engine.
Copilot-Centric Language Found In:
- File: docs/src/content/docs/reference/engines.md:11 - "GitHub Copilot CLI is the default AI engine (coding agent)"
  - Issue: Positions Copilot as the default without explaining why, or whether it matters
  - Fix: Add context: "Copilot is listed first but all engines have equal functionality"
- File: docs/src/content/docs/setup/quick-start.mdx:35 - Prerequisites list Copilot first
  - Issue: Subtle ordering bias
  - Fix: Alphabetize or randomize the order, or add a "Choose one:" prefix
- File: .github/workflows/*.md - 73 Copilot workflows vs 29 Claude vs 9 Codex
  - Issue: Copilot examples dominate by a 2.5:1 ratio
  - Fix: Create 5-10 high-quality Claude example workflows for common use cases
Missing Alternative Instructions:
- Custom Agent Alternative for Claude - Copilot has .github/agents/*.agent.md for custom instructions, but no equivalent is documented for Claude
  - Is this needed? Can Claude users simply customize markdown prompts?
  - Documentation should clarify whether this is Copilot-specific or whether Claude has alternatives
- Web Search Setup for Claude - Documentation mentions "some engines require third-party MCP servers" but doesn't link to specific guides
  - Fix: Add an explicit Claude web-search setup guide, or link to MCP configuration
- Engine Comparison Table - No side-by-side comparison of capabilities
  - Fix: Create a comparison table showing features and capabilities across engines
Positive Findings:
- Token/secrets documentation (docs/src/content/docs/reference/tokens.mdx) is exemplary - each engine has its own section with equal detail
- Engines documentation (docs/src/content/docs/reference/engines.md) provides clear setup instructions for each engine
- Tools documentation is completely engine-agnostic
- Architecture and security documentation never assumes specific engine
Severity-Categorized Findings
🚫 Critical Blockers (Score: 0/10)
None identified. Claude Code users can successfully install, configure, and run workflows.
⚠️ Major Obstacles (Score: 3/10)
Obstacle 1: Copilot-First Quick Start Orientation
Impact: Moderate friction in initial getting-started experience
Current State: Quick Start guide lists Copilot first in prerequisites and doesn't prominently advertise multi-engine support until Step 2 of the wizard
Why It's Problematic: Claude users may assume Copilot is required and abandon adoption before discovering engine choice is available
Suggested Fix:
- Add prominent callout at top of Quick Start: "💡 Choose Your AI Engine: Works with GitHub Copilot, Claude, or Codex"
- Reorder prerequisites to say "Choose one: Claude by Anthropic, GitHub Copilot, or OpenAI Codex"
- Add engine selection badges/icons to make it visually obvious
Affected Files:
- docs/src/content/docs/setup/quick-start.mdx:20-26
- docs/src/content/docs/setup/quick-start.mdx:51-56
- README.md:32
Obstacle 2: Example Workflow Distribution Skews Copilot
Impact: Significant - Claude users may question whether they're "second-class citizens"
Current State: Repository contains 73 Copilot workflows, 29 Claude workflows, 9 Codex workflows (2.5:1 Copilot dominance)
Why It's Problematic:
- Creates perception that Claude is less supported
- Harder to find Claude-specific examples for learning
- May lead users to question feature parity
Suggested Fix:
- Create 10-15 high-quality Claude example workflows for common patterns:
- Issue triage with Claude
- PR review with Claude
- Documentation generation with Claude
- Code quality analysis with Claude
- Research and summarization with Claude
- Create a "Claude Examples" section in documentation
- Tag workflows in repository with engine badges for filtering
- Add engine: to the filename convention (e.g., issue-triage-claude.md)
Affected Files:
- .github/workflows/*.md (73 Copilot, 29 Claude, 9 Codex)
- Documentation examples throughout
Obstacle 3: Web Search Tool Setup Unclear for Non-Copilot
Impact: Moderate - Users wanting web search may get stuck
Current State: Tools documentation says "Some engines require third-party MCP servers for web search" without specifics
Why It's Problematic:
- Unclear if Claude supports web-search out of the box
- No link to setup instructions for MCP-based web search
- Users don't know if they need additional configuration
Suggested Fix:
- Add explicit table in tools.md:
| Engine | Web-Search Support | Setup Required |
|---------|-------------------|----------------|
| Copilot | Native | None |
| Claude | Via MCP Server | [Setup Guide](#) |
| Codex | Via MCP Server | [Setup Guide](#) |
- Create dedicated guide: "Setting Up Web Search with Claude"
- Link from tools.md to the setup guide
Affected Files:
- docs/src/content/docs/reference/tools.md:39-41
- Missing: a web search setup guide for Claude
💡 Minor Confusion Points (Score: 5/10)
- Issue 1: No explicit "all features work with all engines" statement - File: docs/src/content/docs/introduction/how-they-work.mdx
- Issue 2: Custom agent feature documented without noting it is Copilot-specific - File: docs/src/content/docs/reference/engines.md:36-49
- Issue 3: README.md links to Quick Start without mentioning multi-engine support - File: README.md:32
- Issue 4: "Default engine" language without explaining why Copilot is the default - File: docs/src/content/docs/reference/engines.md:11
- Issue 5: No engine comparison table for capabilities - Missing from documentation
Engine Comparison Analysis
Available Engines
Based on my review, gh-aw supports these engines:
- engine: copilot - Well documented, positioned as the "default", extensive examples, native web-search, custom agent support
- engine: claude - Well documented, clear auth setup, good tool support, 29 example workflows, may need MCP for web-search
- engine: codex - Documented, clear auth setup, limited examples (9 workflows), may need MCP for web-search
Documentation Quality by Engine
| Engine | Setup Docs | Examples | Auth Docs | Overall Score |
|---|---|---|---|---|
| Copilot | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ |
| Claude | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ |
| Codex | ⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ |
| Custom | ⭐⭐⭐ | ⭐ | N/A | ⭐⭐ |
Observations:
- Setup documentation is excellent for all engines - each has dedicated section with clear steps
- Authentication documentation is exemplary - tokens.mdx provides comprehensive coverage
- Examples favor Copilot heavily - but enough Claude examples exist to get started
- Custom engine support exists but is minimally documented - advanced users only
Tool Availability Analysis
Tools Review
Analyzed tool compatibility across engines:
Engine-Agnostic Tools (Confirmed):
- ✅ edit: - File editing in the workspace
- ✅ bash: - Shell command execution with an allowlist
- ✅ web-fetch: - Web content fetching
- ✅ github: - All GitHub API operations via MCP
- ✅ playwright: - Browser automation
- ✅ agentic-workflows: - Workflow introspection and debugging
- ✅ cache-memory: - Persistent storage across runs
- ✅ repo-memory: - Repository-specific memory
- ✅ mcp-servers: - Custom MCP server integration
- ✅ All safe-outputs - create-issue, create-discussion, create-pull-request, etc.
Engine-Specific Tools:
- ⚠️ web-search: - May require MCP configuration for non-Copilot engines (documentation unclear)
Unclear/Undocumented:
- ❓ Custom agent capability equivalence across engines
- ❓ Model selection flexibility (can Claude users choose claude-opus vs claude-sonnet?)
Excellent Finding: The tools system is fundamentally engine-agnostic via MCP, which is a major strength for multi-engine support.
Authentication Requirements
Current Documentation
Quick Start guide covers authentication for:
- ✅ Copilot (detailed instructions with video)
- ✅ Claude (clear instructions, links to Anthropic platform)
- ✅ Codex (clear instructions, links to OpenAI API)
Token/Secret Documentation Quality: ⭐⭐⭐⭐⭐ (Excellent)
The docs/src/content/docs/reference/tokens.mdx file is exemplary:
- Dedicated section for each engine
- Video tutorials for PAT creation
- Clear scope/permission requirements
- Fallback token chains documented
- Organization vs user-owned scenarios covered
Secret Names
| Engine | Secret Name | Documentation | CLI Command |
|---|---|---|---|
| Copilot | COPILOT_GITHUB_TOKEN | Comprehensive, with video | gh aw secrets set COPILOT_GITHUB_TOKEN |
| Claude | ANTHROPIC_API_KEY | Clear, with platform links | gh aw secrets set ANTHROPIC_API_KEY |
| Codex | OPENAI_API_KEY | Clear, with platform links | gh aw secrets set OPENAI_API_KEY |
Additional Tokens (All Engine-Agnostic):
- GH_AW_GITHUB_TOKEN - Cross-repo access
- GH_AW_GITHUB_MCP_SERVER_TOKEN - MCP server permissions
- GH_AW_PROJECT_GITHUB_TOKEN - GitHub Projects operations
- GH_AW_AGENT_TOKEN - Copilot agent assignment (a Copilot-specific feature)
Missing for Claude Users
None identified. Authentication documentation is comprehensive and equal across engines.
Example Workflow Analysis
Workflow Count by Engine
- copilot: 73 workflows (65.8%)
- claude: 29 workflows (26.1%)
- codex: 9 workflows (8.1%)
Total: 111 workflows
Quality of Examples
Copilot Examples:
- Extensive coverage of all use cases
- Daily operations, PR reviews, issue triage, code analysis
- Advanced patterns like multi-repo operations
- Security scanning and auditing
- Well-commented and documented
Claude Examples:
- Good coverage of core use cases
- Daily audits, blog monitoring, documentation reviews
- This very workflow is a Claude example (eating our own dog food!)
- Smoke tests and health checks
- Quality is high for examples that exist
Gap Analysis:
The 2.5:1 ratio (Copilot:Claude) creates a perception issue. While 29 Claude examples are sufficient for learning, the imbalance suggests:
- Copilot is the "primary" engine
- Claude is "also supported"
- Users might question feature parity
Recommendation: Aim for 1:1 ratio or at least 1.5:1. Create 15-20 more Claude examples covering:
- Issue triage patterns
- PR review workflows
- Documentation generation
- Code quality analysis
- Research and summarization tasks
- Multi-repo coordination
- Security scanning
Recommended Actions
Priority 1: Critical Documentation Fixes
None required - no critical blockers identified.
Priority 2: Major Improvements
- Add Multi-Engine Callout to Quick Start - Add a prominent "Choose Your AI Engine" callout at the top of the Quick Start guide - File: docs/src/content/docs/setup/quick-start.mdx:1-30
- Create Engine Comparison Table - Add a side-by-side engine comparison showing capabilities, costs, and strengths - File: new file docs/src/content/docs/reference/engine-comparison.md
- Expand Claude Example Workflows - Create 15-20 additional Claude workflows for common patterns - Files: .github/workflows/*.md
- Document Web Search Setup by Engine - Create an explicit guide to web-search with MCP servers for Claude/Codex - File: new guide docs/src/content/docs/guides/web-search-claude.md
- Clarify Custom Agent Alternatives - Document whether Claude users need a custom agent equivalent or if markdown customization suffices - File: docs/src/content/docs/reference/engines.md
Priority 3: Nice-to-Have Enhancements
- Add Engine Badges to Workflow Examples - Visual indicators showing which engine each example uses
- Create "Claude User Quick Start" - Dedicated quick start path for Claude-first users
- Add Engine Filter to Documentation - Tag and filter examples by engine
- Model Selection Documentation - Document how to choose claude-opus vs claude-sonnet
- Engine-Specific Optimization Tips - Best practices for each engine
Positive Findings
What Works Well
Claude Code users will appreciate these aspects:
- ✅ Excellent Authentication Documentation - tokens.mdx is comprehensive and equal across engines
- ✅ Engine-Agnostic Tool System - MCP-based tools work seamlessly with all engines
- ✅ Interactive Engine Selection - gh aw add-wizard prompts for engine choice
- ✅ Clear Engine Configuration - Simply set engine: claude in frontmatter
- ✅ Equal Feature Access - All core features work with all engines
- ✅ Good Security Documentation - No engine-specific security concerns
- ✅ Quality Claude Examples Exist - 29 workflows provide a solid learning foundation
- ✅ No Copilot Lock-In - Easy to switch engines or run a mixed environment
- ✅ CLI Tool Support - gh aw compile, gh aw run, and gh aw logs are all engine-agnostic
- ✅ This Very Workflow - Demonstrates gh-aw eating its own dog food with Claude
Conclusion
Can Claude Code Users Successfully Adopt gh-aw?
Answer: Yes, with moderate initial friction that quickly resolves.
Reasoning:
The gh-aw project is fundamentally well-designed for multi-engine support. Claude users can successfully install, configure, and run workflows without any true blockers. The authentication system is exemplary, tools are engine-agnostic, and 29 example workflows provide sufficient learning material.
However, three areas create friction:
- Quick Start orientation subtly positions Copilot first
- Example distribution (73 vs 29 vs 9) creates perception of unequal support
- Web search capabilities need clearer documentation for non-Copilot engines
None of these are blockers - they're UX friction points that can be resolved with documentation improvements. A Claude user who perseveres through the Quick Start will discover gh-aw works excellently with Claude.
The project deserves credit for:
- Designing tools as engine-agnostic MCP servers
- Providing equal authentication documentation
- Including engine selection in the wizard
- Having 29 high-quality Claude examples
- This workflow itself using Claude (dogfooding)
Overall Assessment Score: 7.5/10
Breakdown:
- Clarity for non-Copilot users: 7/10 (good but Copilot-first orientation)
- Claude engine documentation: 8/10 (excellent setup, good examples)
- Alternative approaches provided: 9/10 (tools are engine-agnostic)
- Engine parity: 8/10 (functional parity, perception gap from examples)
Why 7.5/10:
- Loses 0.5 points for Copilot-first Quick Start orientation
- Loses 1.0 point for example distribution imbalance (73 vs 29)
- Loses 0.5 points for web-search setup clarity
- Loses 0.5 points for missing engine comparison documentation
Next Steps
For Documentation Maintainers:
- Immediate (low effort, high impact):
  - Add a "Choose Your AI Engine" callout to Quick Start
  - Reorder prerequisites to present the engines as equals
  - Add engine badges to README and Quick Start
- Short-term (medium effort, high impact):
  - Create an engine comparison table
  - Document web-search setup for Claude/Codex
  - Create 5-10 high-priority Claude example workflows
- Long-term (high effort, sustained impact):
  - Achieve a 1:1 or 1.5:1 Copilot:Claude example ratio
  - Create a Claude-specific quick start path
  - Add engine filtering to the documentation site
For Claude Code Users Reading This:
Don't be discouraged by the Copilot-first orientation. gh-aw works excellently with Claude:
- Set engine: claude in your workflow frontmatter
- Add ANTHROPIC_API_KEY to your repository secrets
- All tools and features work identically
- 29 example workflows show Claude patterns
- This very review workflow uses Claude!
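Put together, a minimal Claude workflow file might look like the sketch below. Only engine: claude, the create-issue safe output, and the ANTHROPIC_API_KEY requirement come from this review; the trigger, schedule, and prompt text are illustrative assumptions.

```markdown
---
on:
  schedule:
    - cron: "0 9 * * 1"   # assumed weekly trigger, Mondays 09:00 UTC
engine: claude            # frontmatter key confirmed in this review
safe-outputs:
  create-issue:           # safe output listed as engine-agnostic above
---

# Weekly Repository Status

Summarize open issues and recent activity, then file a status issue.
```

Run gh aw compile after adding the file; per this review, the CLI commands are engine-agnostic, so no Claude-specific compile step should be needed.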
Appendix: Files Reviewed
Complete List of Documentation Files Analyzed
Core Documentation:
- README.md
- docs/src/content/docs/setup/quick-start.mdx
- docs/src/content/docs/setup/creating-workflows.mdx
- docs/src/content/docs/setup/cli.md
- docs/src/content/docs/introduction/how-they-work.mdx
- docs/src/content/docs/introduction/architecture.mdx (partial - 34 KB file)
Reference Documentation:
- docs/src/content/docs/reference/tools.md
- docs/src/content/docs/reference/engines.md
- docs/src/content/docs/reference/tokens.mdx
- docs/src/content/docs/reference/faq.md
Additional Files:
- create.md (workflow creation instructions)
- .github/workflows/*.md (111 workflow examples analyzed via grep)
Analysis Methods:
- Direct file reading via bash cat
- Pattern matching with grep for engine distribution
- Manual review of authentication flows
- Tool configuration analysis
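The grep-based engine distribution count described above can be reproduced with a pipeline along these lines. This is a sketch under assumptions: the real workflow filenames and frontmatter layout may differ, so the demo builds a small temporary directory of sample files to stay self-contained.

```shell
# Count workflow files per engine by grepping for the `engine:`
# frontmatter key, as this review's analysis methods describe.
workdir=$(mktemp -d)

# Sample workflow files with minimal frontmatter (hypothetical names).
printf -- '---\nengine: copilot\n---\n' > "$workdir/a.md"
printf -- '---\nengine: claude\n---\n'  > "$workdir/b.md"
printf -- '---\nengine: claude\n---\n'  > "$workdir/c.md"

# One count per engine; grep -rl lists matching files, wc -l counts them.
for engine in copilot claude codex; do
  count=$(grep -rl "engine: $engine" "$workdir" | wc -l)
  echo "$engine: $count"
done
```

Pointing the same loop at a real checkout's .github/workflows directory would reproduce the 73/29/9 split reported here, assuming each workflow declares its engine in frontmatter.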
Report Generated: 2026-02-13
Workflow Run ID: 21995786541
Workflow: claude-code-user-docs-review
Engine Used: claude (eating our own dog food! 🐕)
Review Methodology: Systematic reading of core docs → Critical question analysis → Severity categorization → Actionable recommendations
Note: This was intended to be a discussion, but discussions could not be created due to permissions issues. This issue was created as a fallback.
AI generated by Claude Code User Documentation Review