
Commit 36c3e01

feat: Add Ollama integration and improve CLI usability and documentation
- Added full support for Ollama via official Python client (`ollama>=0.4.8`)
- New agent class `OllamaAgent` for local LLMs like LLaMA3 via Ollama
- Refactored `AgentBuilder` to support multiple backends (phi, ollama)
- Improved CLI with:
  - `--ollama` and `--model` flags
  - Host configuration via `--ollama-host`
  - Structured prompt help and error handling
- Updated README with new usage examples, clearer feature list, and Ollama setup instructions
- Renamed `phi_4_mini.py` → `phi_4_mini_agent.py` for consistency
- Enhanced logging and feedback for model loading and errors
- Improved UX: response formatting, keypress actions (copy/execute/abort) with better feedback
1 parent 26729eb commit 36c3e01
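The last bullet describes the copy/execute/abort keypress flow, which is not itself visible in the diffs below. The following is a minimal, hypothetical sketch of such a confirmation loop using `pyperclip` (already a project dependency); the key bindings, messages, and helper name are illustrative assumptions, not the actual open-codex implementation.

```python
# Hypothetical sketch of a copy/execute/abort prompt; not the actual
# open-codex CLI code. Key bindings and messages are assumptions.
import subprocess

import pyperclip  # listed in pyproject.toml


def prompt_user_action(command: str) -> None:
    print(f"Suggested command: {command}")
    choice = input("[e]xecute / [c]opy / [a]bort? ").strip().lower()
    if choice == "e":
        subprocess.run(command, shell=True)  # run only after explicit approval
    elif choice == "c":
        pyperclip.copy(command)  # put the suggested command on the clipboard
        print("Copied to clipboard.")
    else:
        print("Aborted.")
```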

File tree

7 files changed: +736 -284 lines changed

README.md

Lines changed: 43 additions & 24 deletions
@@ -8,40 +8,56 @@
 
 ---
 
-**Open Codex** is a fully open-source command-line AI assistant inspired by OpenAI Codex, supporting local language models like `phi-4-mini`.
+**Open Codex** is a fully open-source command-line AI assistant inspired by OpenAI Codex, supporting local language models like `phi-4-mini` and **full integration with Ollama**.
 
-No API key is required. Everything runs locally.
+🧠 **Runs 100% locally** – no OpenAI API key required. Everything works offline.
 
-Supports:
-- **One-shot mode**: `open-codex "list all folders"` -> returns shell command
-- 🧠 Local-only execution using supported OS models (currently `phi-4-mini`)
+---
+
+## Supports
+
+* **One-shot mode**: `open-codex "list all folders"` -> returns shell command
+* **Ollama integration** (e.g., LLaMA3, Mistral)
+* Native execution on **macOS, Linux, and Windows**
 
 ---
 ## ✨ Features
 
-- Natural Language to Shell Command (via local models)
-- Works on macOS, Linux, and Windows (Python-based)
-- Confirmation before execution
-- Add to clipboard / abort / execute prompt
-- One-shot interaction mode (interactive and function-calling coming soon)
+- Natural Language → Shell Command (via local or Ollama-hosted LLMs)
+- Local-only execution: no data sent to the cloud
+- Confirmation before running any command
+- Option to copy to clipboard / abort / execute
 - Colored terminal output for better readability
+- Ollama support: use advanced LLMs with `--ollama --model llama3`
+
+### 🔍 Example with Ollama:
+
+```bash
+open-codex --ollama --model llama3 "find all JPEGs larger than 10MB"
+```
+
+Codex will:
+
+1. Send your prompt to the Ollama API (local server, e.g. on `localhost:11434`)
+2. Return a shell command suggestion (e.g., `find . -name "*.jpg" -size +10M`)
+3. Prompt you to execute, copy, or abort
+
+> 🛠️ You must have [Ollama](https://ollama.com) installed and running locally to use this feature.
 
 ---
 
 ## 🧱 Future Plans
 
-- Interactive, context aware mode
+- Interactive, context-aware mode
 - Fancy TUI with `textual` or `rich`
-- Add support for additional OSS Models
 - Full interactive chat mode
 - Function-calling support
-- Voice input via Whisper
-- Command history and undo
+- Whisper-based voice input
+- Command history & undo
 - Plugin system for workflows
 
 ---
 
-
 ## 📦 Installation
 
 
@@ -53,42 +69,46 @@ brew install open-codex
 ```
 
 
-### 🔹 Option 2: Install via pipx (cross-platform)
+### 🔹 Option 2: Install via pipx (Cross-platform)
 
 ```bash
 pipx install open-codex
 ```
 
-### 🔹 Option 3: Clone & Install locally
+### 🔹 Option 3: Clone & install locally
 
 ```bash
 git clone https://github.com/codingmoh/open-codex.git
 cd open_codex
 pip install .
 ```
 
-
-Once installed, you can use the `open-codex` CLI globally.
+Once installed, use the `open-codex` CLI globally.
 
 ---
 
-## 🚀 Usage
+## 🚀 Usage Examples
 
-### One-shot mode
+### ▶️ One-shot mode
 
 ```bash
 open-codex "untar file abc.tar"
 ```
-
 ✅ Codex suggests a shell command
 ✅ Asks for confirmation / add to clipboard / abort
 ✅ Executes if approved
 
+### ▶️ Using Ollama
+
+```bash
+open-codex --ollama --model llama3 "delete all .DS_Store files recursively"
+```
+
 ---
 
 ## 🛡️ Security Notice
 
-All models run locally. Commands are only executed after explicit approval.
+All models run **locally**. Commands are executed **only after your explicit confirmation**.
 
 ---
 
@@ -105,4 +125,3 @@ MIT
 ---
 
 ❤️ Built with love and caffeine by [codingmoh](https://github.com/codingmoh).
-

pyproject.toml

Lines changed: 1 addition & 0 deletions
@@ -7,6 +7,7 @@ requires-python = ">=3.11"
 dependencies = [
     "huggingface-hub>=0.30.2",
     "llama-cpp-python>=0.3.8",
+    "ollama>=0.4.8",
     "prompt_toolkit",
     "pyinstaller>=6.13.0",
     "pyperclip>=1.9.0",

src/open_codex/agent_builder.py

Lines changed: 18 additions & 4 deletions
@@ -1,15 +1,29 @@
 from importlib.resources import files
 
-from open_codex.agents.phi_4_mini import AgentPhi4Mini
 from open_codex.interfaces.llm_agent import LLMAgent
 
 class AgentBuilder:
+
+    @staticmethod
+    def get_system_prompt() -> str:
+        return files("open_codex.resources") \
+            .joinpath("prompt.txt") \
+            .read_text(encoding="utf-8")
 
     @staticmethod
-    def get_agent() -> LLMAgent:
-        system_prompt = files("open_codex.resources").joinpath("prompt.txt").read_text(encoding="utf-8")
-        return AgentPhi4Mini(system_prompt=system_prompt)
+    def get_phi_agent() -> LLMAgent:
+        from open_codex.agents.phi_4_mini_agent import Phi4MiniAgent
+        system_prompt = AgentBuilder.get_system_prompt()
+        return Phi4MiniAgent(system_prompt=system_prompt)
 
+    @staticmethod
+    def get_ollama_agent(model: str, host: str) -> LLMAgent:
+        from open_codex.agents.ollama_agent import OllamaAgent
+        system_prompt = AgentBuilder.get_system_prompt()
+        return OllamaAgent(system_prompt=system_prompt,
+                           model_name=model,
+                           host=host)
+
     @staticmethod
     def read_file(file_path: str) -> str:
         with open(file_path, 'r') as file:

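The commit message mentions `--ollama`, `--model`, and `--ollama-host` CLI flags, but the CLI wiring itself is not part of this diff. The sketch below shows how such flags could be routed to the new factory methods; the use of `argparse`, the defaults, and the function name are assumptions, not the actual open-codex CLI code.

```python
# Hypothetical sketch of backend selection via AgentBuilder; the real
# open-codex CLI may parse its arguments differently.
import argparse

from open_codex.agent_builder import AgentBuilder
from open_codex.interfaces.llm_agent import LLMAgent


def build_agent() -> LLMAgent:
    parser = argparse.ArgumentParser(prog="open-codex")
    parser.add_argument("prompt", help="natural-language request")
    parser.add_argument("--ollama", action="store_true", help="use an Ollama-hosted model")
    parser.add_argument("--model", default="llama3", help="Ollama model name (assumed default)")
    parser.add_argument("--ollama-host", default="http://localhost:11434",
                        help="Ollama server URL (assumed default)")
    args = parser.parse_args()

    # Pick the backend: Ollama when requested, otherwise the local phi-4-mini agent.
    if args.ollama:
        return AgentBuilder.get_ollama_agent(model=args.model, host=args.ollama_host)
    return AgentBuilder.get_phi_agent()
```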
src/open_codex/agents/ollama_agent.py

Lines changed: 110 additions & 0 deletions
@@ -0,0 +1,110 @@
+from typing import List, Dict
+import logging
+import ollama
+
+from open_codex.interfaces.llm_agent import LLMAgent
+
+# Configure logger
+logger = logging.getLogger(__name__)
+
+class OllamaAgent(LLMAgent):
+    """
+    Agent that connects to Ollama to access local language models
+    using the official Python client.
+    """
+
+    def __init__(self,
+                 system_prompt: str,
+                 model_name: str,
+                 host: str,
+                 temperature: float = 0.2,
+                 max_tokens: int = 500):
+        """
+        Initialize the Ollama agent.
+
+        Args:
+            system_prompt: The system prompt to use for generating responses
+            model_name: The name of the Ollama model to use (e.g. "llama3")
+            host: The host URL of the Ollama API (e.g. "http://localhost:11434")
+            temperature: The temperature to use for generation (default: 0.2)
+            max_tokens: The maximum number of tokens to generate (default: 500)
+        """
+        self.system_prompt = system_prompt
+        self.model_name = model_name
+        self.host = host
+
+        self.temperature = temperature
+        self.max_tokens = max_tokens
+        self._ollama_client = ollama.Client(host=self.host)
+
+    def _check_ollama_available(self) -> None:
+        """Check if the Ollama server is available and the model exists."""
+        try:
+            # List models to check the connection
+            models: ollama.ListResponse = self._ollama_client.list()
+
+            available_models = [model.model for model in models.models if model.model is not None]
+
+            if not available_models:
+                logger.error(f"No models found in Ollama. You may need to pull the model with: ollama pull {self.model_name}")
+            elif self.model_name not in available_models:
+                logger.error(f"Model '{self.model_name}' not found in Ollama. Available models: {', '.join(available_models)}")
+                logger.error(f"You can pull the model with: ollama pull {self.model_name}")
+
+        except ConnectionError as e:
+            logger.error("Could not connect to Ollama server.")
+            logger.error(
+                f"Make sure Ollama is running at {self.host} or install it from https://ollama.com"
+            )
+            raise ConnectionError(
+                f"Could not connect to Ollama server. "
+                f"Make sure Ollama is running at {self.host} or install it from https://ollama.com"
+            ) from e
+
+    def one_shot_mode(self, user_input: str) -> str:
+        """
+        Generate a one-shot response to the user input.
+
+        Args:
+            user_input: The user's input prompt
+
+        Returns:
+            The generated response as a string
+        """
+        self._check_ollama_available()
+        messages = [
+            {"role": "system", "content": self.system_prompt},
+            {"role": "user", "content": user_input}
+        ]
+
+        response = self._generate_completion(messages)
+        return response.strip()
+
+    def _generate_completion(self, messages: List[Dict[str, str]]) -> str:
+        """
+        Generate a completion using the Ollama API.
+
+        Args:
+            messages: The conversation history as a list of message dictionaries
+
+        Returns:
+            The generated text response
+        """
+        try:
+            # Use the configured client so the host passed via --ollama-host is honoured
+            response = self._ollama_client.chat(
+                model=self.model_name,
+                messages=messages,
+                options={
+                    "temperature": self.temperature,
+                    "num_predict": self.max_tokens,
+                }
+            )
+
+            if "message" in response and "content" in response["message"]:
+                return response["message"]["content"]
+            else:
+                raise ValueError(f"Unexpected response format from Ollama API: {response}")
+
+        except Exception as e:
+            raise ConnectionError(f"Error communicating with Ollama: {str(e)}") from e

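As a usage illustration, the new agent can also be exercised directly. A small sketch follows, assuming an Ollama server on the default port; the system prompt text here is illustrative, since open-codex loads its real prompt from `open_codex/resources/prompt.txt` via `AgentBuilder.get_system_prompt()`.

```python
# Small usage sketch; the system prompt and model are illustrative values.
from open_codex.agents.ollama_agent import OllamaAgent

agent = OllamaAgent(
    system_prompt="Reply with a single shell command, nothing else.",
    model_name="llama3",
    host="http://localhost:11434",
)
print(agent.one_shot_mode("find all JPEGs larger than 10MB"))
```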