
Ollama support assumes localhost, ignoring config.toml #8240

@roger-s-flores

Description


What version of Codex is running?

codex-cli 0.73.0

What subscription do you have?

Free

Which model were you using?

gpt-oss:20b

What platform is your computer?

Microsoft Windows NT 10.0.26200.0 x64

What issue are you seeing?

Connecting to Ollama running on another computer on my home LAN does not work at all.

~/.codex/config.toml:
oss_provider = "ollama"

[model_providers.ollama]
name = "Ollama"
base_url = "http://192.168.1.50:11434/v1"
wire_api = "responses"

[profiles.gpt-oss]
model_provider = "ollama"
model = "gpt-oss:20b"

Run with:
codex -p gpt-oss

Trying a command leads to five reconnection attempts followed by the error:
■ stream disconnected before completion: error sending request for url (http://localhost:11434/v1/chat/completions)

Note that the IP address specified in base_url has been replaced with localhost.

I don't think a remote base_url is supported at all; the tests I can find in the Rust code only use localhost.
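
To make the mismatch concrete, here is a small Python sketch (purely illustrative, not Codex code; Python 3.11+ for tomllib) that builds the chat-completions URL the way I would expect it to be derived from my config.toml. The result should be the 192.168.1.50 URL, not the localhost one shown in the error:

import tomllib
from pathlib import Path

# Read the same config.toml shown above and pick out the provider's base_url.
config = tomllib.loads((Path.home() / ".codex" / "config.toml").read_text())
base_url = config["model_providers"]["ollama"]["base_url"]

# Expected request target: base_url joined with the chat completions path.
expected = base_url.rstrip("/") + "/chat/completions"
print(expected)  # http://192.168.1.50:11434/v1/chat/completions

# Observed target, taken from the error message above.
observed = "http://localhost:11434/v1/chat/completions"
print(expected == observed)  # False -- the configured host appears to be ignored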

I know config.toml is being read, because if I change the wire_api line to
wire_api = "responses7"
I get the expected validation error:
Error: unknown variant responses7, expected responses or chat
in model_providers.ollama.wire_api

Ollama is running fine:
$ curl -i http://192.168.8.81:11434
HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Date: Thu, 18 Dec 2025 06:23:44 GMT
Content-Length: 17

Ollama is running
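
For completeness, the OpenAI-compatible chat endpoint can also be exercised directly from the client machine, bypassing Codex entirely. A minimal Python check against the base_url host from my config (assuming gpt-oss:20b is already pulled on the server):

import json
import urllib.request

# Same URL Codex should be using, per base_url in config.toml.
url = "http://192.168.1.50:11434/v1/chat/completions"
body = json.dumps({
    "model": "gpt-oss:20b",
    "messages": [{"role": "user", "content": "Say hello."}],
}).encode()

req = urllib.request.Request(url, data=body,
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req, timeout=120) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])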

codex --oss similarly does not work. I was hoping it would notice the [model_providers.ollama] entry and use the base_url specified there, since --oss is more convenient to type.
$ codex --oss
Error: OSS setup failed: No running Ollama server detected.
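
My guess is that the --oss server detection only probes localhost:11434. A quick Python probe from the client machine illustrates the situation it would see (192.168.1.50 is the host from my config):

import socket

# If only the LAN host is reachable, a localhost-only detection check would
# explain the "No running Ollama server detected" error.
for host in ("localhost", "192.168.1.50"):
    try:
        with socket.create_connection((host, 11434), timeout=2):
            print(f"{host}:11434 reachable")
    except OSError as exc:
        print(f"{host}:11434 not reachable ({exc})")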

What steps can reproduce the bug?

Use ~/.codex/config.toml with a base_url that isn't localhost:
oss_provider = "ollama"

[model_providers.ollama]
name = "Ollama"
base_url = "http://192.168.1.50:11434/v1"
wire_api = "responses"

[profiles.gpt-oss]
model_provider = "ollama"
model = "gpt-oss:20b"

Run codex:
codex -p gpt-oss

Type any prompt that requires connecting to the remote Ollama server.

Notice five reconnection attempts followed by the message:
■ stream disconnected before completion: error sending request for url (http://localhost:11434/v1/chat/completions)

What is the expected behavior?

Codex should connect to the remote Ollama server running at the configured base_url.

Additional information

ollama version 0.13.4


Labels

CLI (Issues related to the Codex CLI), bug (Something isn't working), custom-model (Issues related to custom model providers (including local models))
