Bug fix: Fix the Git Service when running on Ollama #219

esnible wants to merge 2 commits into kagenti:main from
Conversation
Signed-off-by: Ed Snible <snible@us.ibm.com>
rubambiza
left a comment
Review Summary
The MCP port fix (9090 -> 8000) is correct and matches Kagenti's default tool port. The /v1 URL suffix and model ID change look intentional for OpenAI-compatible mode via litellm. One issue: the file header comment is now stale.
Areas reviewed: Config/env
Commits: 1 commit, signed-off
CI status: All passing (10/10)
```diff
 # LLM configuration
-TASK_MODEL_ID=ollama_chat/ibm/granite4:latest
+TASK_MODEL_ID=gpt-oss:latest
```
must-fix: The header comment on line 5 still says ollama pull ibm/granite4:latest but the model ID is now gpt-oss:latest. Please update the comment to match the new model, or explain what gpt-oss is and what prerequisite pull command is needed.
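One way to address this would be a header comment that names the new model and its prerequisite. This is a sketch only; the exact wording (and whether `ollama pull gpt-oss:latest` is the right prerequisite command) is an assumption for the author to confirm:

```
# Prerequisite: pull the model first, e.g.
#   ollama pull gpt-oss:latest
TASK_MODEL_ID=gpt-oss:latest
```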
```
# For in-cluster Ollama: http://ollama.ollama.svc:11434/v1
LLM_API_BASE=http://host.docker.internal:11434/v1
OLLAMA_API_BASE=http://host.docker.internal:11434/v1
LLM_API_KEY=ollama
```
suggestion: Adding /v1 switches from the native Ollama API to the OpenAI-compatible endpoint. This is correct for litellm with the gpt-oss model prefix, but a brief inline comment explaining why /v1 is needed would help future readers.
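To illustrate the point: Ollama's native API answers at the bare `host:port`, while its OpenAI-compatible endpoints live under `/v1`, which is what OpenAI-style clients such as litellm expect. The helper below is hypothetical (not part of this repo), purely to show the normalization:

```python
def openai_compatible_base(url: str) -> str:
    """Append /v1 to an Ollama base URL unless it is already present.

    Hypothetical helper: Ollama's native API lives at host:11434,
    while its OpenAI-compatible API lives at host:11434/v1.
    """
    url = url.rstrip("/")
    return url if url.endswith("/v1") else url + "/v1"


# Both spellings normalize to the OpenAI-compatible endpoint.
print(openai_compatible_base("http://host.docker.internal:11434"))
print(openai_compatible_base("http://host.docker.internal:11434/v1"))
```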
Thanks, please re-review.
Signed-off-by: Ed Snible <esnible@acm.org>
Summary
We supply a configuration for the Git Service when running on Kagenti under Ollama, but it doesn't work.
For example, the chat query "What do you think about https://github.com/kagenti/agent-examples/issues/218 ?" yields an error. The default port for an MCP tool in Kagenti is 8000, but this Agent example assumes the MCP server will be deployed on 9090.
(Optional) Testing Instructions