added ollama support #10

Merged: 1 commit merged into master on May 4, 2025

Conversation

codingmoh (Owner) commented:

🚀 Add Ollama Support to Open Codex

Summary

This PR introduces native support for Ollama as an alternative LLM backend alongside the existing Phi-4-mini integration.

✨ Key Changes

  • New Agent: OllamaAgent implemented using the official ollama Python client (a minimal usage sketch follows this list):

    • Supports system_prompt, configurable model_name, and temperature
    • Validates model presence and Ollama server availability
    • Uses ollama.chat with a one-shot response interface
  • AgentBuilder Enhancements:

    • get_ollama_agent() for easy instantiation
    • get_system_prompt() extracted for reuse
  • CLI Extensions (main.py):

    • Added --ollama flag to enable Ollama mode
    • Added --model flag to specify the model (e.g., llama3)
    • Updated one_shot_mode() to dynamically select the agent backend
    • Improved CLI help messages for clarity
  • Refactored phi_4_mini.py:

    • Moved download_model() below __init__ for better structure (functionality unchanged)
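
For illustration, here is a rough sketch of what the OllamaAgent core described above could look like when built on the official ollama client. This is not the PR's actual code; the method name and implementation details are assumptions for this example.

# Sketch only: a one-shot chat agent wrapping ollama.chat.
import ollama

class OllamaAgent:
    def __init__(self, system_prompt: str, model_name: str = "llama3", temperature: float = 0.2):
        self.system_prompt = system_prompt
        self.model_name = model_name
        self.temperature = temperature

    def one_shot(self, user_input: str) -> str:
        # Single request/response round trip against a local Ollama server.
        response = ollama.chat(
            model=self.model_name,
            messages=[
                {"role": "system", "content": self.system_prompt},
                {"role": "user", "content": user_input},
            ],
            options={"temperature": self.temperature},
        )
        return response["message"]["content"]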

🔪 Example Usage

# Use default Phi-4-mini
open-codex "list files in this directory"

# Use Ollama with default model
open-codex --ollama "summarize this Python file"

# Use Ollama with a specific model
open-codex --ollama --model llama3 "explain what this shell script does"

📝 Notes

  • Ollama must be running locally (http://localhost:11434)
  • Models must be preloaded via ollama pull <model_name>
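
As a rough illustration of these preconditions (not the PR's implementation), a readiness check with the official client could look like this; the function name is made up for this example.

# Sketch only: verify the Ollama server is reachable and the model is pulled.
import ollama

def ensure_ollama_ready(model_name: str = "llama3") -> None:
    try:
        ollama.show(model_name)  # succeeds only if the model is available locally
    except ollama.ResponseError:
        raise SystemExit(f"Model '{model_name}' not found; run: ollama pull {model_name}")
    except Exception as exc:  # e.g. the server is not running on http://localhost:11434
        raise SystemExit(f"Could not reach Ollama: {exc}")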

@codingmoh codingmoh added the enhancement New feature or request label Apr 30, 2025
@codingmoh codingmoh requested a review from Copilot April 30, 2025 00:38
Copilot AI left a comment:

Pull Request Overview

This PR adds native support for Ollama as an LLM backend alongside the existing Phi-4-mini integration. Key changes include:

  • Introducing a new OllamaAgent to interface with the official Ollama Python client.
  • Enhancing the AgentBuilder and CLI to support an --ollama flag and a configurable model name.
  • Refactoring the prompt reading and repositioning the download_model function in phi_4_mini.py.

Reviewed Changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 1 comment.

Files reviewed:
  • src/open_codex/main.py: added argparse enhancements for choosing the agent backend and CLI improvements.
  • src/open_codex/agents/phi_4_mini.py: repositioned and re-added download_model to improve code structure.
  • src/open_codex/agents/ollama_restapi.py: introduces the new OllamaAgent with proper error handling and API integration.
  • src/open_codex/agent_builder.py: added get_system_prompt and get_ollama_agent for improved agent instantiation.
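
To make the agent_builder.py changes concrete, here is a rough sketch of the two helpers named above; the prompt resource path and the exact constructor wiring are assumptions, not the reviewed code.

# Sketch only: builder helpers for selecting the Ollama backend.
from importlib.resources import files
from typing import Optional

class AgentBuilder:
    @staticmethod
    def get_system_prompt() -> str:
        # Assumed resource location; extracted so both backends share one prompt.
        return files("open_codex.resources").joinpath("prompt.txt").read_text()

    @staticmethod
    def get_ollama_agent(model_name: str = "llama3", host: Optional[str] = None):
        from open_codex.agents.ollama_restapi import OllamaAgent
        return OllamaAgent(
            system_prompt=AgentBuilder.get_system_prompt(),
            model_name=model_name,
            host=host,
        )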

@codingmoh codingmoh linked an issue Apr 30, 2025 that may be closed by this pull request
Inline review comment on this docstring excerpt from the diff:

    Args:
        system_prompt: The system prompt to use for generating responses
        model_name: The name of the model to use (default: "llama3")
        host: The host URL of the Ollama API (default: None, uses OLLAMA_HOST env var or http://localhost:11434)
@codingmoh (Owner, Author) commented on Apr 30, 2025:

The host URL for Ollama is hardcoded as http://localhost:11434.
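
For reference, the fallback order the docstring describes (explicit argument, then OLLAMA_HOST, then the localhost default) could be implemented like this; the helper name is made up for this example.

# Sketch only: resolve the Ollama host instead of hardcoding it.
import os
from typing import Optional
from ollama import Client

def make_client(host: Optional[str] = None) -> Client:
    resolved = host or os.environ.get("OLLAMA_HOST", "http://localhost:11434")
    return Client(host=resolved)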

@codewithdark-git left a comment:

The issue occurs when I try to load the Ollama model without running Ollama locally. When I run the command open-codex --ollama --model llama3 "create a tarball of the src dir", it loads the default model and checks if Ollama is running.

@codingmoh (Owner, Author):

Thanks @codewithdark-git, I'll check.

@codingmoh codingmoh force-pushed the feat/ollama_api branch 3 times, most recently from 9db6c9f to db581c7 on May 3, 2025 at 23:46
- Added full support for Ollama via official Python client (`ollama>=0.4.8`)
- New agent class `OllamaAgent` for local LLMs like LLaMA3 via Ollama
- Refactored `AgentBuilder` to support multiple backends (phi, ollama)
- Improved CLI with:
  - `--ollama` and `--model` flags
  - Host configuration via `--ollama-host`
  - Structured prompt help and error handling
- Updated README with new usage examples, clearer feature list, and Ollama setup instructions
- Renamed `phi_4_mini.py` → `phi_4_mini_agent.py` for consistency
- Enhanced logging and feedback for model loading and errors
- Improved UX: response formatting, keypress actions (copy/execute/abort) with better feedback
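
A minimal sketch of how the CLI flags listed in this commit could be wired with argparse follows; the help text and defaults here are illustrative, not the merged main.py.

# Sketch only: argparse wiring for --ollama, --model, and --ollama-host.
import argparse

def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(prog="open-codex")
    parser.add_argument("prompt", help="natural-language instruction to turn into a command")
    parser.add_argument("--ollama", action="store_true",
                        help="use a local Ollama server instead of the default Phi-4-mini backend")
    parser.add_argument("--model", default="llama3",
                        help="Ollama model name (used with --ollama)")
    parser.add_argument("--ollama-host", default=None,
                        help="Ollama host URL; falls back to OLLAMA_HOST or http://localhost:11434")
    return parser.parse_args()
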
@codingmoh codingmoh merged commit 36c3e01 into master May 4, 2025
@codingmoh (Owner, Author):

Thanks again! I added proper error handling and merged it.

Labels: enhancement (New feature or request)

Successfully merging this pull request may close this issue: allow to use locally installed OLLAMA