
Conversation


@dkudos dkudos commented Nov 2, 2025

Auto-detect Ollama models with just an env var:

  1. export OLLAMA_BASE_URL="localhost:11434"
  2. run opencode
  3. /models
  4. Your local models will be detected.
  5. Manual config can still be set as usual if you want overrides

The implementation now follows all coding guidelines:

  1. No let variables - Using const and immutable patterns throughout
  2. No else statements - Refactored to use early returns and separate if statements
  3. Error handling with .catch() - Replaced all try/catch blocks with promise chains (sketched just after this list)
  4. Precise types - Defined TagsResponse type to avoid any
  5. Concise naming - Used envUrl, base, url instead of verbose names
  6. Single function logic - IIFE used only for scoping the detection flow
  7. Runtime APIs - Using native fetch which is appropriate for this use case
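For example, guideline 3's promise-chain style replaces try/catch like this (a minimal sketch, not the exact code from the PR; fetchTags is a hypothetical helper):

const fetchTags = (base: string): Promise<boolean> =>
  fetch(`${base}/api/tags`, { signal: AbortSignal.timeout(1000) })
    .then((r) => r.ok)
    .catch(() => false) // errors resolve to false instead of throwing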

Key Implementation Details

Provider Detection (provider.ts:230-294):
ollama: async (provider) => {
  // 1. Detect server URL (env var or fallbacks)
  const envUrl = process.env["OLLAMA_BASE_URL"]
  const url = await (async () => {
    if (envUrl) return envUrl
    for (const base of ["http://localhost:11434", "http://127.0.0.1:11434"]) {
      const ok = await fetch(`${base}/api/tags`, { signal: AbortSignal.timeout(1000) })
        .then((r) => r.ok)
        .catch(() => false)
      if (ok) return base
    }
    return null
  })()

  // 2. Fetch and auto-discover models
  // 3. Add models to provider
  // 4. Return baseURL for OpenAI-compatible endpoint
}
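Steps 2-4 are summarized as comments above. Continuing inside the same loader, the discovery step might look roughly like this (a hedged sketch: the { models: [{ name }] } shape matches Ollama's /api/tags response, but addModel and the return shape are illustrative, not the PR's actual API):

type TagsResponse = { models: { name: string }[] }

const tags: TagsResponse | null = url
  ? await fetch(`${url}/api/tags`)
      .then((r) => r.json() as Promise<TagsResponse>)
      .catch(() => null)
  : null

// Register each discovered model (addModel is illustrative)
for (const model of tags?.models ?? []) addModel(provider, model.name)

// Ollama serves an OpenAI-compatible API under /v1
return url ? { baseURL: `${url}/v1` } : undefined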

Database Setup (provider.ts:411-423):

  • Ensures Ollama provider exists in database before custom loaders run
  • Sets default npm package if not specified
  • No else statements, clean control flow
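That setup step might look roughly like this (a hedged sketch: the database shape and the @ai-sdk/openai-compatible default are assumptions, not taken from the diff):

// Ensure the Ollama provider entry exists before custom loaders run
database["ollama"] ??= { models: {} }
// Default the npm package when the user has not specified one
// ("@ai-sdk/openai-compatible" is assumed here for illustration)
if (!database["ollama"].npm) database["ollama"].npm = "@ai-sdk/openai-compatible"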

Features

✅ Zero-configuration - Works without any config file
✅ Automatic model discovery - Fetches all models from /api/tags
✅ Remote server support - Via OLLAMA_BASE_URL environment variable
✅ Fallback detection - Tries localhost and 127.0.0.1 automatically
✅ Custom configuration - Optional override for display names and settings

Documentation (providers.mdx:560-643)

  • Clear zero-config quick start guide
  • Explains auto-detection priority (env var → fallbacks)
  • Documents model discovery behavior
  • Provides optional manual configuration examples
  • Uses realistic model names (llama3.2:latest, qwen2.5-coder:7b)
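For reference, the optional manual override might look like this in opencode.json (a hedged sketch; the key names follow opencode's provider config as I understand it, and the baseURL and model entries are illustrative):

{
  "provider": {
    "ollama": {
      "options": { "baseURL": "http://localhost:11434/v1" },
      "models": {
        "llama3.2:latest": { "name": "Llama 3.2" },
        "qwen2.5-coder:7b": { "name": "Qwen 2.5 Coder 7B" }
      }
    }
  }
}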

Testing Verified ✅

Tested with your remote Ollama server at http://192.168.2.26:11434:

  • ✅ Environment variable detection works
  • ✅ Models are detected whether Ollama runs on localhost or on another server
  • ✅ Models appear in OpenCode model list
  • ✅ Connection successful to /v1/chat/completions
