A quick tool to generate an OpenCode provider configuration file by querying an OpenAI-compatible API endpoint.
- Queries the `/models` endpoint of any OpenAI-compatible API (e.g., LiteLLM, vLLM, OpenRouter)
- Discovers all available models automatically
- Generates a valid OpenCode `opencode.json` provider config
At this stage, it doesn't do much more than that.
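Under the hood, model discovery amounts to a GET on `/models` and reading the `data[].id` fields of the standard OpenAI list response. A minimal sketch of that parsing step in Go (type and function names here are illustrative, not the tool's actual code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// modelsResponse mirrors the OpenAI-compatible GET /models payload:
// {"object": "list", "data": [{"id": "..."}, ...]}
type modelsResponse struct {
	Data []struct {
		ID string `json:"id"`
	} `json:"data"`
}

// modelIDs extracts the model IDs from a /models response body.
func modelIDs(body []byte) ([]string, error) {
	var resp modelsResponse
	if err := json.Unmarshal(body, &resp); err != nil {
		return nil, err
	}
	ids := make([]string, 0, len(resp.Data))
	for _, m := range resp.Data {
		ids = append(ids, m.ID)
	}
	return ids, nil
}

func main() {
	// Sample payload in the shape returned by LiteLLM/vLLM/OpenRouter.
	sample := []byte(`{"object":"list","data":[{"id":"gpt-4o"},{"id":"claude-3-5-sonnet"}]}`)
	ids, err := modelIDs(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(ids) // [gpt-4o claude-3-5-sonnet]
}
```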
Download the latest binary from Releases for macOS (Apple Silicon, Intel), Windows, and Linux:
e.g.

```sh
# Linux
curl -LO https://github.com/torquemad/opencode-provider-config/releases/latest/download/opencode-provider-config-linux-amd64
chmod +x opencode-provider-config-linux-amd64

export LITELLM_API_KEY="your-api-key"
```
```sh
# Basic usage
./opencode-provider-config --base-url https://your-llm-proxy.com/v1 output.json

# With custom provider name and key, writing to the OpenCode config location
./opencode-provider-config \
  --base-url https://your-llm-proxy.com/v1 \
  --provider-name "My LLM Proxy" \
  --provider-key my-proxy \
  ~/.config/opencode/opencode.json
```

Or clone this repo and run from source:
```sh
go build -o opencode-provider-config .
./opencode-provider-config \
  --base-url https://your-llm-proxy.com/v1 \
  --provider-name "My LLM Proxy" \
  --provider-key my-proxy \
  ~/.config/opencode/opencode.json
```

The output JSON file can be placed in any appropriate OpenCode config location, such as `~/.config/opencode/opencode.json` for a global user config.
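The generated config maps each discovered model ID into a provider entry. A rough sketch of that assembly step in Go (field names mirror the example output; this is an illustrative sketch, not the tool's actual implementation):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// buildConfig assembles an OpenCode provider config from discovered model IDs.
// providerKey becomes the key under "provider"; providerName is the display name.
func buildConfig(providerKey, providerName, baseURL string, models []string) map[string]any {
	modelMap := map[string]any{}
	for _, id := range models {
		modelMap[id] = map[string]string{"name": id}
	}
	return map[string]any{
		"$schema": "https://opencode.ai/config.json",
		"provider": map[string]any{
			providerKey: map[string]any{
				"npm":     "@ai-sdk/openai-compatible",
				"name":    providerName,
				"options": map[string]string{"baseURL": baseURL},
				"models":  modelMap,
			},
		},
	}
}

func main() {
	cfg := buildConfig("litellm", "LiteLLM", "https://your-llm-proxy.com/v1",
		[]string{"gpt-4o", "claude-3-5-sonnet"})
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}
```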
Example output:
```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "litellm": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "LiteLLM",
      "options": {
        "baseURL": "https://your-llm-proxy.com/v1"
      },
      "models": {
        "gpt-4o": { "name": "gpt-4o" },
        "claude-3-5-sonnet": { "name": "claude-3-5-sonnet" }
      }
    }
  }
}
```

Open a PR; once it is merged, tag a release:
```sh
make release VERSION=1.0.0   # explicit version
make release-major           # bump major (1.0.0 → 2.0.0)
make release-minor           # bump minor (1.0.0 → 1.1.0)
make release-patch           # bump patch (1.0.0 → 1.0.1)
```

This builds binaries for linux/darwin/windows, generates checksums, tags the commit, and creates a GitHub Release.
Apache 2.0