feat: add Custom OpenAI Provider support (BYOK) #204
Conversation
- Add comprehensive Custom OpenAI Provider implementation
- Support for custom API endpoints with OpenAI-compatible APIs
- Secure credential storage with VS Code SecretStorage
- Real-time connectivity testing and validation
- UI for model configuration and management
- Fix Edit Mode support for custom models with tool calling
- Add comprehensive test coverage

Fixes microsoft/vscode-copilot-release#7518
@microsoft-github-policy-service agree
How is this different from #38?
@Igorgro 🤷‍♂️ I submitted the PR following their documented process and attached the issue to it. Had the other PR been attached to the issue properly, I would not have bothered. I can't speak to the differences in code.
The ability to run arbitrary local models is obviously one of the first features the community was going to request and implement PRs for once Copilot was open sourced. The fact that multiple PRs are being submitted for this key feature shouldn't come as any surprise if it is not being addressed. Microsoft/GitHub should have anticipated this even before they posted this GitHub repository on 6/30/2025.
@bartlettroscoe I believe that, the way I have this plugin structured, you can test with locally hosted models as well. As long as the model exposes an OpenAI-compatible endpoint and is reachable from the machine running it, it should work.
@jmcombs, does this PR remove the need to log into your GitHub account to use Copilot locally? |
@bartlettroscoe No, it does not. I know exactly where the few lines of code are that do the check, because I commented them out for testing, but I suspect they'd reject the PR if I left them out, since I am pretty sure an active Copilot subscription is required to use this extension.
Summary
This PR implements comprehensive Custom OpenAI Provider support (Bring Your Own Key - BYOK) for GitHub Copilot Chat, enabling users to integrate any OpenAI-compatible API endpoint with their own API keys.
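As a rough illustration of the BYOK credential flow described here (all names below are hypothetical, not the PR's actual identifiers), API keys can be kept in a SecretStorage-style store keyed per provider. A minimal sketch against a SecretStorage-like interface:

```typescript
// Minimal SecretStorage-like interface; the real VS Code API
// (vscode.SecretStorage via context.secrets) has a compatible surface.
interface SecretStore {
  get(key: string): Promise<string | undefined>;
  store(key: string, value: string): Promise<void>;
  delete(key: string): Promise<void>;
}

// In-memory stand-in so the sketch is self-contained.
class InMemorySecretStore implements SecretStore {
  private secrets = new Map<string, string>();
  async get(key: string) { return this.secrets.get(key); }
  async store(key: string, value: string) { this.secrets.set(key, value); }
  async delete(key: string) { this.secrets.delete(key); }
}

// Namespacing keys per provider keeps multiple BYOK endpoints separate.
const byokKey = (provider: string) => `byok.apiKey.${provider}`;

async function saveApiKey(store: SecretStore, provider: string, apiKey: string): Promise<void> {
  await store.store(byokKey(provider), apiKey);
}

async function loadApiKey(store: SecretStore, provider: string): Promise<string | undefined> {
  return store.get(byokKey(provider));
}
```

In the extension itself, `context.secrets` would play the role of `SecretStore`, so keys never land in plain-text settings.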
Features Added
🔧 Core Functionality
🎨 User Interface
🛠️ Technical Implementation
🎯 Edit Mode Support Fix
- Computes the agentMode capability based on tool calling support and token limits
- Requires chat.edits2.enabled: true
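If Edit Mode v2 is gated behind the `chat.edits2.enabled` setting as described, opting in via user settings would look something like this (setting name taken from the PR description; treat the snippet as illustrative):

```jsonc
{
  // Opt in to Edit Mode v2 so BYOK models with agentMode support can use it.
  "chat.edits2.enabled": true
}
```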
Technical Details
Files Added/Modified
- src/extension/byok/ - Complete BYOK implementation
- common/byokProvider.ts - Core provider logic and model metadata
- vscode-node/byokContribution.ts - VS Code integration and lifecycle
- vscode-node/byokStorageService.ts - Secure credential storage
- vscode-node/byokUIService.ts - Configuration UI and user interaction
- vscode-node/customOpenAIProvider.ts - OpenAI-compatible API integration
- node/openAIEndpoint.ts - HTTP client and API communication
- vscode-node/test/ - Comprehensive test coverage

Key Fix: Edit Mode Support
The critical fix for Edit Mode support was in byokProvider.ts. This ensures custom models with tool calling support and a sufficient context window (>40k tokens) can use Edit Mode v2, matching the behavior of built-in OpenAI models.
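The fix itself isn't quoted here, but the described capability check boils down to a small predicate. A sketch under stated assumptions (the type and function names are hypothetical, not the PR's actual code):

```typescript
// Hypothetical metadata shape for a user-configured BYOK model.
interface CustomModelInfo {
  supportsToolCalling: boolean;
  maxContextTokens: number;
}

// Edit Mode v2 needs tool calling plus a context window above ~40k tokens,
// per the PR description; previously this was effectively hardcoded to false.
function computeAgentMode(model: CustomModelInfo): boolean {
  return model.supportsToolCalling && model.maxContextTokens > 40_000;
}
```

With this, a model like grok-3 (tool calling, large context) qualifies, while a small local model without tool calling falls back to Ask-style behavior.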
Testing
Fixes
Closes microsoft/vscode-copilot-release#7518
Review Notes
@lramos15 This implements the Custom OpenAI Provider feature requested in the issue. The implementation includes:
The Edit Mode regression mentioned in the issue is fixed by properly calculating the agentMode capability for custom models based on their actual capabilities (tool calling + token limits) rather than hardcoding it to false.

Testing was completed using xAI with grok3 and grok4. I do not have access to any other OpenAI compatible language models. Testing with other OpenAI compatible endpoints is highly advised.
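For anyone testing other endpoints: the connectivity validation mentioned in the description amounts to probing the provider's OpenAI-compatible API. A rough sketch, assuming the common `GET /v1/models` listing route (URL handling and names are illustrative, not the PR's code):

```typescript
// Build the request for a connectivity probe against an OpenAI-compatible
// endpoint. Most compatible servers (OpenAI, xAI, many local runners)
// expose GET /v1/models behind bearer-token auth.
function buildModelsRequest(baseUrl: string, apiKey: string) {
  const url = `${baseUrl.replace(/\/+$/, "")}/v1/models`;
  return {
    url,
    method: "GET" as const,
    headers: { Authorization: `Bearer ${apiKey}` },
  };
}

// Probe the endpoint; true means the server answered with a 2xx status.
async function checkConnectivity(baseUrl: string, apiKey: string): Promise<boolean> {
  const { url, headers } = buildModelsRequest(baseUrl, apiKey);
  try {
    const res = await fetch(url, { headers });
    return res.ok;
  } catch {
    return false; // network error, DNS failure, unreachable host, etc.
  }
}
```

Running `checkConnectivity` against a local runner's base URL is a quick way to confirm the endpoint is reachable before wiring it into the UI.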
I will note that it felt a little slower in Edit mode than in Ask mode. I don't know whether that's due to running v2 of Edit mode or to running on the Insiders edition of VS Code.