16 changes: 9 additions & 7 deletions a2a/git_issue_agent/.env.ollama
```diff
@@ -2,15 +2,16 @@
 #
 # Uses a local Ollama instance for LLM inference.
 # Prerequisite: Ollama must be running with the model pulled:
-# ollama pull ibm/granite4:latest
+# ollama pull gpt-oss:latest

 # LLM configuration
-TASK_MODEL_ID=ollama_chat/ibm/granite4:latest
+TASK_MODEL_ID=gpt-oss:latest
```
Contributor (review comment):

must-fix: The header comment on line 5 still says `ollama pull ibm/granite4:latest` but the model ID is now `gpt-oss:latest`. Please update the comment to match the new model, or explain what gpt-oss is and what prerequisite pull command is needed.
```diff
 # Ollama API base URL. Required by litellm (used by crewai >=1.10).
-# For Docker Desktop / Kind: http://host.docker.internal:11434
-# For in-cluster Ollama: http://ollama.ollama.svc:11434
-LLM_API_BASE=http://host.docker.internal:11434
-OLLAMA_API_BASE=http://host.docker.internal:11434
+# For Docker Desktop / Kind: http://host.docker.internal:11434/v1
+# For in-cluster Ollama: http://ollama.ollama.svc:11434/v1
+# (This line matches all the other LLM_API_BASE examples in this repo)
+LLM_API_BASE=http://host.docker.internal:11434/v1
 # The API key is a dummy; ollama doesn't use it
 LLM_API_KEY=ollama
```
Contributor (review comment):

suggestion: Adding `/v1` switches from the native Ollama API to the OpenAI-compatible endpoint. This is correct for litellm with the gpt-oss model prefix, but a brief inline comment explaining why `/v1` is needed would help future readers.

Contributor (author):

Thanks, please re-review.
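The `/v1` point in the suggestion above can be sketched in code. This is a minimal, hypothetical helper (the function name and the `openai/` model prefix are assumptions for illustration, not the agent's actual code) showing how the variables in this file might be assembled into litellm-style completion kwargs, with `api_base` pointing at Ollama's OpenAI-compatible endpoint rather than its native `/api` routes:

```python
import os

def litellm_kwargs(env=None):
    """Assemble LLM settings from the .env values (illustrative helper)."""
    env = os.environ if env is None else env
    return {
        # An "openai/" prefix routes litellm to its OpenAI-compatible
        # provider, which expects the /v1 base URL (assumption here).
        "model": "openai/" + env.get("TASK_MODEL_ID", "gpt-oss:latest"),
        "api_base": env.get("LLM_API_BASE",
                            "http://host.docker.internal:11434/v1"),
        "api_key": env.get("LLM_API_KEY", "ollama"),  # dummy; Ollama ignores it
        "temperature": float(env.get("MODEL_TEMPERATURE", "0")),
    }
```

With the values from this `.env.ollama`, the resulting dict targets `http://host.docker.internal:11434/v1` with model `openai/gpt-oss:latest`.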

```diff
 MODEL_TEMPERATURE=0
@@ -19,4 +20,5 @@ SERVICE_PORT=8000
 LOG_LEVEL=DEBUG

 # MCP Tool endpoint
-MCP_URL=http://github-tool-mcp:9090/mcp
+# Port 8000 is the default for Kagenti Tools
+MCP_URL=http://github-tool-mcp:8000/mcp
```
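To sanity-check the new `MCP_URL`, an MCP client begins by sending a JSON-RPC `initialize` request over the streamable HTTP transport. A minimal sketch of that message follows (shape per the Model Context Protocol spec; the `protocolVersion` value and the `clientInfo` fields are placeholder assumptions):

```python
import json

def mcp_initialize_request(request_id=1):
    """Build the JSON-RPC 'initialize' request an MCP client sends first."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",  # spec revision; match the server's
            "capabilities": {},
            "clientInfo": {"name": "probe", "version": "0.0.1"},  # placeholder
        },
    }

body = json.dumps(mcp_initialize_request())
```

POSTing this body to `MCP_URL` with the header `Accept: application/json, text/event-stream` is one way to confirm the tool really listens on port 8000 rather than 9090.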