
Use Local Running Custom Models

Shx Guo edited this page Feb 6, 2024 · 1 revision

Copilot for Xcode supports chat/embedding models that expose an OpenAI-compatible web API. To use such a model, simply add it to the app and switch the chat/embedding provider to it.
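As a sketch of what "OpenAI-compatible" means here: the server's chat endpoint accepts a JSON body of the shape below. The model name and message content are placeholders, not values required by the app.

```python
import json

# Build the JSON body for an OpenAI-compatible chat completions
# request (POST <base-url>/v1/chat/completions). Any server that
# speaks this format works as a custom chat model provider.
def build_chat_request(model: str, user_message: str) -> str:
    payload = {
        "model": model,  # placeholder; local servers often ignore this
        "messages": [
            {"role": "user", "content": user_message},
        ],
    }
    return json.dumps(payload)

body = build_chat_request("local-model", "Hello")
print(body)
```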

It's recommended to use LM Studio to run models locally.

To use it:

  1. Download the latest version from their website.
  2. Download a model you want from either the home tab or the search tab.
  3. Go to the local server tab and select the model you just downloaded to load it.
  4. Click "Start Server" to start it. You can find the base URL in the "Client Code Example"; it looks like http://localhost:1234.
  5. Set up the model in "Service - Chat Models". The model name field can be any value.
  6. Change the chat feature provider to the new model in "Feature - Chat".
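After starting the server, you can sanity-check the endpoint with a short script. This is a sketch assuming the default LM Studio base URL from step 4 and the standard chat completions path; the model name is a placeholder, since the server answers with whichever model it has loaded.

```python
import json
import urllib.error
import urllib.request

# Assumed base URL; use whatever the "Client Code Example" shows.
BASE_URL = "http://localhost:1234"

def chat(user_message: str) -> str:
    """Send one chat message to the local server; return the reply,
    or a fallback string if the server is not running."""
    payload = json.dumps({
        "model": "local-model",  # placeholder; any value works
        "messages": [{"role": "user", "content": user_message}],
    }).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            data = json.load(resp)
            return data["choices"][0]["message"]["content"]
    except (urllib.error.URLError, OSError):
        return "server not reachable"

print(chat("Hello"))
```

If the script prints "server not reachable", check that the model is loaded and "Start Server" has been clicked.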