This project is no longer maintained. (Due to changes introduced by Copilot, the repository will only function with the chat window, not the Edit window.)
Make VS Code GitHub Copilot work with any open-weight model: Llama3, DeepSeek-Coder, StarCoder, etc. This should work even if you are not a Copilot subscriber.
- I'm already familiar with and enjoy using the GitHub Copilot extension (yes, I know there are other awesome extensions, such as Continue).
- Copilot may not always use the latest GPT models. It currently uses models like `gpt-4-0125-preview`, `gpt-3.5-turbo` and others.
- Transferring code from the editor to ChatGPT to use GPT-4o is inconvenient.
- I'm interested in using alternative models such as Llama3, DeepSeek-Coder, StarCoder, and Sonnet 3.5.
- I have subscriptions to both ChatGPT and Copilot but would like to cancel my Copilot subscription.
- Install copilot_proxy

  ```shell
  git clone https://github.com/jjleng/copilot-proxy.git
  cd copilot-proxy

  # install dependencies
  poetry install

  # build and install copilot_proxy as a runnable script
  poetry build && pip uninstall copilot_proxy -y && ls dist/*.whl | sort -V | tail -n 1 | xargs pip install

  # verify that the script has been installed successfully
  copilot_proxy --version
  ```
- Run copilot_proxy with Ollama models, OpenRouter models, or any OpenAI API compatible endpoint

  ```shell
  # Ollama
  MODEL_URL="http://localhost:11434/v1/chat/completions" MODEL_NAME="llama3:instruct" MODEL_API_KEY="whatever" copilot_proxy start

  # OpenRouter
  MODEL_URL="https://openrouter.ai/api/v1/chat/completions" MODEL_NAME="deepseek/deepseek-coder" MODEL_API_KEY="YOUR_KEY" copilot_proxy start
  ```
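Whatever backend `MODEL_URL` points at must accept OpenAI-style chat completion requests. As a rough sketch (the field names follow the OpenAI chat API; the model name is just the Ollama example value from above), the requests forwarded to the backend look like this:

```python
import json

# An OpenAI-compatible chat completion request body, as forwarded to
# MODEL_URL. The model name is the example MODEL_NAME value from above;
# any backend that accepts this shape should work.
payload = {
    "model": "llama3:instruct",
    "messages": [
        {"role": "system", "content": "You are a coding assistant."},
        {"role": "user", "content": "Explain this function."},
    ],
    "stream": True,  # Copilot chat expects streamed responses
}

body = json.dumps(payload)
print(body)
```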
- Install the mitmproxy certificate

  copilot_proxy uses mitmproxy to proxy the Copilot traffic, so you need to install the mitmproxy certificate. After the first run, you should find the cert files under `~/.mitmproxy`. See https://docs.mitmproxy.org/stable/concepts-certificates/ for details.
Quick guides for importing the CA certificate into your operating system's trust store:
Windows (not verified)

- Open the Windows Start menu and type `mmc` to open the Microsoft Management Console.
- Go to `File` -> `Add/Remove Snap-in...`.
- Select `Certificates` and click `Add`.
- Choose `Computer account` and click `Next`.
- Select `Local computer` and click `Finish`.
- Click `OK` to close the snap-in window.
- In the console, expand `Certificates (Local Computer)` -> `Trusted Root Certification Authorities` -> `Certificates`.
- Right-click on `Certificates` and choose `All Tasks` -> `Import`.
- Follow the import wizard, select `Browse`, and navigate to the `mitmproxy-ca-cert.pem` file. Make sure to select `All Files` to see the `.pem` file.
- Finish the import process.
Mac (verified)

- Open `Keychain Access` from `Applications` -> `Utilities`.
- Import it into the `System` keychain.
- In the `File` menu, select `Import Items`.
- Navigate to `~/.mitmproxy` and select the `mitmproxy-ca-cert.pem` file.
- Find the imported certificate in the list, right-click it, and select `Get Info`.
- Expand the `Trust` section and set `When using this certificate` to `Always Trust`.
- Close the info window and authenticate with your password to apply the changes.
Linux (not verified)

- Copy the `mitmproxy-ca-cert.pem` file to the system certificate directory:

  ```shell
  sudo cp ~/.mitmproxy/mitmproxy-ca-cert.pem /usr/local/share/ca-certificates/mitmproxy-ca-cert.crt
  ```

- Update the certificate store:

  ```shell
  sudo update-ca-certificates
  ```
- Restart copilot_proxy
- Set up the proxy in VS Code
Follow the guide here: https://docs.github.com/en/copilot/managing-copilot/configure-personal-settings/configuring-network-settings-for-github-copilot
For code completions, the best results come from calling the model through a FIM (fill-in-the-middle) template, which is done by calling the /completion endpoint with the suffix parameter. If your inference server is compatible with the OpenAI /completion endpoint and supports the suffix parameter, you can get better completions by using the code in the completion branch. This setup will produce the best results.
NOTICE: Ollama doesn't support the /completion endpoint, and vLLM doesn't support the suffix parameter.
Code from the main branch works well with Copilot chat but might not produce high-quality completions. However, it is agnostic to the inference server.
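To make the FIM mechanism above concrete, here is a sketch of a /completion-style request using the suffix parameter, next to a raw FIM prompt spliced by hand. The model name is hypothetical, and the sentinel tokens shown are StarCoder-style; other models use different sentinels, so treat the template as an assumption:

```python
import json

# Code before and after the cursor, as the editor would send them.
prefix = "def add(a, b):\n    return "
suffix = "\n\nprint(add(1, 2))\n"

# Option 1: an OpenAI-style /completion request with the `suffix`
# parameter (requires a backend that supports it; vLLM does not).
request = {
    "model": "deepseek-coder",  # hypothetical model name
    "prompt": prefix,
    "suffix": suffix,
    "max_tokens": 64,
}

# Option 2: splice the FIM template yourself with sentinel tokens.
# These are StarCoder-style tokens; other models differ.
fim_prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

print(json.dumps(request))
print(fim_prompt)
```

The model is expected to generate the text that belongs between the prefix and suffix, which is why completions built this way tend to beat chat-style prompting.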
In VS Code settings, we can override the Copilot proxy URL:

```json
{
  "github.copilot.advanced": {
    "debug.testOverrideProxyUrl": "http://localhost:11434",
    "debug.overrideProxyUrl": "http://localhost:11434"
  }
}
```

This did not work for me.
- Only code completion triggers the endpoint; the chat box does not.
- It seems I still need to keep the Copilot subscription?

