Make adding other OpenAI-Compatible providers possible #38
Conversation
@lex-ibm I don't see anything mentioning an API key for the custom provider. Is it implied somehow by the existing codebase? |
@En3Tho the API key part of it is already managed by the parent class |
My bad... |
I think what you're doing already is correct, yeah. |
I tested this modification, and it works well, but there are a few details.

```jsonc
"github.copilot.chat.openAICompatibleProviders": [
  {
    "name": "douchat",
    "url": "https://xxx/v1",
  }
],
```

After
I got it. It needs tool-calling ability. Thank you very much for your work — it's amazing! |
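For context, tool-calling support can be probed directly against an OpenAI-compatible endpoint before configuring it in the extension. This is a minimal sketch; the URL, API key, model name, and `get_weather` function are placeholders, not values from this PR:

```sh
# Send a request that should trigger a tool call.
curl -s https://xxx/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $API_KEY" \
  -d '{
    "model": "my-model",
    "messages": [{"role": "user", "content": "What is the weather in Berlin?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {"city": {"type": "string"}},
          "required": ["city"]
        }
      }
    }]
  }'
# A tool-capable server responds with choices[0].message.tool_calls;
# if that field never appears, agent mode will not work with this provider.
```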
Hello lex-ibm, I have made a PR to your project. Now we can use the UI to manage more providers easily. You can download it at: https://drive.google.com/file/d/1yFuBnmqmVXvDhv3cPamJy5g8ynR2tjrH/view?usp=sharing |
@lex-ibm thanks a lot for this PR 🙏 @lramos15 is the right person to review this one, but he is out on vacation this week. So some initial comments from me:
@isidorn Hi, here is my PR lex-ibm#1, and I have done what you said, like not using a setting. You can install the release from https://github.com/relic-yuexi/vscode-copilot-chat/releases/tag/0.1 to test it. I have tested it and it works for me. |
@relic-yuexi thank you for offering. However, let's first give @lex-ibm some time to respond, as he is the initial author of the PR. |
Nice feature! I didn't know this Copilot Chat extension was open source, which makes this possible. I thought it was closed source. |
+1 for this feature. |
Avoid layoffs by merging amazing PRs. Nice feature! |
+1 from my side |
Sorry, I was on vacation last week. I'll take a look at @relic-yuexi's PR and the feedback provided. |
When I use the VSIX compiled from this change and configure the OpenAI-compatible server, the model doesn't work. It gives me this error when I chat with Copilot: "Sorry, no response was returned." Do you guys meet this issue? |
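A quick way to narrow down this kind of failure is to query the server directly, outside the extension. This is a sketch with placeholder URL, API key, and model name:

```sh
# Confirm the endpoint is reachable and the served model name matches the settings.
curl -s https://xxx/v1/models -H "Authorization: Bearer $API_KEY"

# Then try a plain chat completion; if this works but Copilot still shows
# "Sorry, no response was returned", the failure is likely in tool-call handling.
curl -s https://xxx/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $API_KEY" \
  -d '{"model": "my-model", "messages": [{"role": "user", "content": "hello"}]}'
```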
In my try:
The vLLM hermes tool-call parser can't work.
The sglang qwen2 tool-call parser is good!
one-api seems unable to use tools; you can find more at https://github.com/songquanpeng/one-api/issues?q=is%3Aissue%20tool-call

```sh
python -m sglang.launch_server --model-path Qwen/Qwen3-32B --reasoning-parser qwen3 --served-model-name qwen332b --tp 8 --tool-call-parser qwen25
```

can use agent mode, but

```sh
vllm serve Qwen/Qwen3-32B \
  --tensor-parallel-size 8 \
  --enable_prefix_caching \
  --served-model-name qwen332b \
  --enable-auto-tool-choice \
  --tool-call-parser hermes \
  --reasoning-parser deepseek_r1
```

can't. |
Did you find any model served by vLLM that can be used for this? I tried llama3.1, phi, and qwen2.5-coder with vLLM; none of them work. |
You should try sglang with the qwen25 tool-call parser. The vLLM hermes parser has some errors. |
@relic-yuexi I figured out why the vLLM server can't be used as a custom endpoint. I created an issue here. |
Before reviewing and potentially merging this PR we want to first finalize the Language Model Provider API; this should happen in the next couple of weeks (microsoft/vscode#250007). Thanks for your patience 🙏 |
The branch was force-pushed from 19e8a2c to 7af5b72.
No description provided.