
Conversation

Copilot AI (Contributor) commented on Jan 28, 2026

The Claude and Codex engines were labeled "experimental" despite production evidence (29 Claude workflows and 9 Codex workflows in active use). The label created an unnecessary perception of instability and discouraged adoption.

Changes

  • Banner replacement (lines 10-12): Removed the "Experimental Engines" warning and replaced it with a neutral multi-engine support note stating that all engines support the same tools, triggers, and safe outputs
  • Claude section (line 61): Removed the "experimental" designation from the description

Before/After

Before:

> [!NOTE]
> Experimental Engines
> Claude and Codex engines are available but marked as experimental...

After:

> [!NOTE]
> Multiple AI Engines Supported
> GitHub Agentic Workflows supports multiple AI engines: GitHub Copilot CLI (default), 
> Anthropic Claude, and OpenAI Codex. All engines support the same tools, triggers, 
> and safe outputs. Choose the engine that best fits your needs and API access.

Documentation-only change. No functional impact.

Original prompt

This section details the original issue you should resolve.

<issue_title>[plan] Remove 'experimental' label from Claude/Codex engines documentation</issue_title>
<issue_description>## Context

As identified in discussion #12027 (Claude Code User Documentation Review), the "experimental" label on Claude and Codex engines creates unnecessary perception of instability, despite evidence of production viability (29 Claude workflows and 9 Codex workflows exist in the repository).

## Current State

In docs/src/content/docs/reference/engines.md:12, there is a banner stating:

"Claude and Codex engines are available but marked as experimental. They are not documented here but can still be used by setting engine: claude or engine: codex in your workflow frontmatter. For production workflows, we recommend using the GitHub Copilot CLI engine."

## Problem

  • Creates perception that Claude/Codex are unstable or risky
  • Discourages production use despite evidence of stability
  • States "not documented here" which is confusing (they ARE documented in the same file)
  • Contradicts repository evidence (29 Claude workflows in active use)

## Objective

Replace the "experimental" warning with a neutral, welcoming note about multi-engine support.

## Proposed Solution

Replace the current experimental banner with:

> [!NOTE]
> Multiple AI Engines Supported
> GitHub Agentic Workflows supports multiple AI engines: GitHub Copilot CLI (default), Anthropic Claude, and OpenAI Codex. All engines support the same tools, triggers, and safe outputs. Choose the engine that best fits your needs and API access.

## Files to Modify

  • docs/src/content/docs/reference/engines.md (line 12 area)

## Acceptance Criteria

  • Remove "experimental" language from engines.md
  • Replace with neutral multi-engine note
  • Verify documentation renders correctly
  • No changes to actual engine functionality (docs-only change)

## References

AI generated by Plan Command for discussion #12027
</issue_description>

Comments on the Issue (you are @copilot in this section)



Replace experimental warning with neutral multi-engine support note.
Update Claude section to remove experimental designation.

Fixes #12007

Co-authored-by: pelikhan <4175913+pelikhan@users.noreply.github.com>
Copilot AI changed the title from "[WIP] Remove 'experimental' label from Claude/Codex engines documentation" to "Remove experimental designation from Claude/Codex engines" on Jan 28, 2026
Copilot AI requested a review from pelikhan on January 28, 2026 at 01:26
pelikhan closed this on Jan 28, 2026