Codewithdark git add/qwen coder #11

Open · wants to merge 6 commits into `master`
86 changes: 63 additions & 23 deletions README.md
@@ -8,40 +8,59 @@

---

**Open Codex** is a fully open-source command-line AI assistant inspired by OpenAI Codex, supporting local language models like `phi-4-mini`.
**Open Codex** is a fully open-source command-line AI assistant inspired by OpenAI Codex, supporting local language models like `phi-4-mini` and **full integration with Ollama**.

No API key is required. Everything runs locally.
🧠 **Runs 100% locally** – no OpenAI API key required. Everything works offline.

Supports:
- **One-shot mode**: `open-codex "list all folders"` -> returns shell command
- 🧠 Local-only execution using supported OS models (currently `phi-4-mini`)
---

## Supports

* **One-shot mode**: `open-codex "list all folders"` -> returns shell command
* **Ollama integration** for local models (e.g., LLaMA3, Mistral)
* Native execution on **macOS, Linux, and Windows**
* **qwen-2.5-coder** (Hugging Face token required, enhanced for coding tasks)

---
## ✨ Features

- Natural Language to Shell Command (via local models)
- Works on macOS, Linux, and Windows (Python-based)
- Confirmation before execution
- Add to clipboard / abort / execute prompt
- One-shot interaction mode (interactive and function-calling coming soon)
- Natural Language → Shell Command (via local or Ollama-hosted LLMs)
- Local-only execution: no data sent to the cloud
- Confirmation before running any command
- Option to copy to clipboard / abort / execute
- Colored terminal output for better readability
- Ollama support: use advanced LLMs with `--ollama --model llama3`
- Smart command validation and error handling
- Real-time command output streaming

### 🔍 Example with Ollama:

```bash
open-codex --ollama --model llama3 "find all JPEGs larger than 10MB"
```

Codex will:

1. Send your prompt to the Ollama API (local server, e.g. on `localhost:11434`)
2. Return a shell command suggestion (e.g., `find . -name "*.jpg" -size +10M`)
3. Prompt you to execute, copy, or abort

> 🛠️ You must have [Ollama](https://ollama.com) installed and running locally to use this feature.
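
Under the hood, that flow amounts to a single chat request against the local Ollama server. A minimal sketch using the official `ollama` Python client, mirroring the `OllamaAgent` added later in this PR (the prompts below are illustrative):

```python
# Minimal sketch of the request Open Codex sends to a local Ollama server.
# Assumes Ollama is running on localhost:11434 and `llama3` has been pulled.
import ollama

client = ollama.Client(host="http://localhost:11434")
response = client.chat(
    model="llama3",
    messages=[
        {"role": "system", "content": "Reply with a single shell command."},
        {"role": "user", "content": "find all JPEGs larger than 10MB"},
    ],
    options={"temperature": 0.2, "num_predict": 500},
)
print(response["message"]["content"])  # e.g. find . -name "*.jpg" -size +10M
```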

---

## 🧱 Future Plans

- Interactive, context aware mode
- Interactive, context-aware mode
- Fancy TUI with `textual` or `rich`
- Add support for additional OSS Models
- Full interactive chat mode
- Function-calling support
- Voice input via Whisper
- Command history and undo
- Whisper-based voice input
- Command history & undo
- Plugin system for workflows

---


## 📦 Installation


@@ -53,42 +72,64 @@ brew install open-codex
```


### 🔹 Option 2: Install via pipx (cross-platform)
### 🔹 Option 2: Install via pipx (Cross-platform)

```bash
pipx install open-codex
```

### 🔹 Option 3: Clone & Install locally
### 🔹 Option 3: Clone & install locally

```bash
git clone https://github.com/codingmoh/open-codex.git
cd open_codex
pip install .
```


Once installed, you can use the `open-codex` CLI globally.
Once installed, use the `open-codex` CLI globally.

---

## 🚀 Usage
## 🚀 Usage Examples

### One-shot mode

Basic usage with the default model (`phi-4-mini`):
```bash
open-codex "untar file abc.tar"
open-codex "list all python files"
```

Using the Qwen model for enhanced coding tasks:
```bash
# First, set your Hugging Face token
export HUGGINGFACE_TOKEN=your_token_here

# Then use the Qwen model
open-codex --model qwen-2.5-coder "find python files modified today"

# Or provide token directly
open-codex --model qwen-2.5-coder --hf-token your_token_here "your command"
```
✅ Codex suggests a shell command
✅ Asks for confirmation / add to clipboard / abort
✅ Executes if approved
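
For background, gated Hugging Face models are normally downloaded with `huggingface_hub` using that token. A rough sketch of what the download step might look like (the repo and file names here are assumptions for illustration, not necessarily what Open Codex uses internally):

```python
# Hypothetical sketch: fetching a quantized GGUF build of the Qwen coder model
# with a Hugging Face token. repo_id and filename are illustrative assumptions.
import os
from huggingface_hub import hf_hub_download

token = os.environ.get("HUGGINGFACE_TOKEN")
model_path = hf_hub_download(
    repo_id="Qwen/Qwen2.5-Coder-7B-Instruct-GGUF",     # assumed model repository
    filename="qwen2.5-coder-7b-instruct-q4_k_m.gguf",  # assumed quantized file
    token=token,
)
print(f"Model downloaded to {model_path}")
```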

### ▶️ Using Ollama

```bash
open-codex --ollama --model llama3 "delete all .DS_Store files recursively"
```

---

## 🛡️ Security Notice

All models run locally. Commands are only executed after explicit approval.
All models run **locally**. Commands are executed **only after your explicit confirmation**.

---

@@ -105,4 +146,3 @@ MIT
---

❤️ Built with love and caffeine by [codingmoh](https://github.com/codingmoh).

1 change: 1 addition & 0 deletions pyproject.toml
@@ -7,6 +7,7 @@ requires-python = ">=3.11"
dependencies = [
"huggingface-hub>=0.30.2",
"llama-cpp-python>=0.3.8",
"ollama>=0.4.8",
"prompt_toolkit",
"pyinstaller>=6.13.0",
"pyperclip>=1.9.0",
25 changes: 21 additions & 4 deletions src/open_codex/agent_builder.py
@@ -1,15 +1,32 @@
from importlib.resources import files
from typing import Literal, Optional

from open_codex.agents.phi_4_mini import AgentPhi4Mini
from open_codex.interfaces.llm_agent import LLMAgent

ModelType = Literal["phi-4-mini", "qwen-2.5-coder"]

class AgentBuilder:

@staticmethod
def get_system_prompt() -> str:
return files("open_codex.resources") \
.joinpath("prompt.txt") \
.read_text(encoding="utf-8")

@staticmethod
def get_agent() -> LLMAgent:
system_prompt = files("open_codex.resources").joinpath("prompt.txt").read_text(encoding="utf-8")
return AgentPhi4Mini(system_prompt=system_prompt)
def get_phi_agent() -> LLMAgent:
from open_codex.agents.phi_4_mini_agent import Phi4MiniAgent
system_prompt = AgentBuilder.get_system_prompt()
return Phi4MiniAgent(system_prompt=system_prompt)

@staticmethod
def get_ollama_agent(model: str, host: str) -> LLMAgent:
from open_codex.agents.ollama_agent import OllamaAgent
system_prompt = AgentBuilder.get_system_prompt()
return OllamaAgent(system_prompt=system_prompt,
model_name=model,
host=host)

@staticmethod
def read_file(file_path: str) -> str:
with open(file_path, 'r') as file:
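
For reference, a hedged sketch of how a caller (for example, the CLI entry point, which is not shown in this diff) might use the new factory methods; the argument values are illustrative:

```python
# Illustrative use of the new AgentBuilder factory methods (values are assumptions).
from open_codex.agent_builder import AgentBuilder

# Default local model
phi_agent = AgentBuilder.get_phi_agent()

# Ollama-backed model, pointing at a locally running server
ollama_agent = AgentBuilder.get_ollama_agent(
    model="llama3",
    host="http://localhost:11434",
)

print(ollama_agent.one_shot_mode("list all folders"))
```
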
110 changes: 110 additions & 0 deletions src/open_codex/agents/ollama_agent.py
@@ -0,0 +1,110 @@
from typing import List, Dict
import logging
import ollama

from open_codex.interfaces.llm_agent import LLMAgent

# Configure logger
logger = logging.getLogger(__name__)

class OllamaAgent(LLMAgent):
"""
Agent that connects to Ollama to access local language models
using the official Python client.
"""

def __init__(self,
system_prompt: str,
model_name: str,
host: str,
temperature: float = 0.2,
max_tokens: int = 500):
"""
Initialize the Ollama agent.

Args:
system_prompt: The system prompt to use for generating responses
model_name: The name of the Ollama model to use (e.g., "llama3")
host: The host URL of the Ollama API (e.g., "http://localhost:11434")
temperature: The temperature to use for generation (default: 0.2)
max_tokens: The maximum number of tokens to generate (default: 500)
"""
self.system_prompt = system_prompt
self.model_name = model_name
self.host = host

self.temperature = temperature
self.max_tokens = max_tokens
self._ollama_client = ollama.Client(host=self.host)

def _check_ollama_available(self) -> None:
"""Check if Ollama server is available and the model exists."""
try:
# List models to check connection
models: ollama.ListResponse = self._ollama_client.list()

available_models = [model.model for model in models.models if model.model is not None]

if not available_models:
logger.error(f"No models found in Ollama. You may need to pull the model with: ollama pull {self.model_name}")
elif self.model_name not in available_models:
logger.error(f"Model '{self.model_name}' not found in Ollama. Available models: {', '.join(available_models)}")
logger.error(f"You can pull the model with: ollama pull {self.model_name}")

except Exception as e:
# Connection failures from the underlying HTTP client are not guaranteed to be
# ConnectionError subclasses, so catch broadly and re-raise with a clear message.
logger.error(f"Could not connect to Ollama server: {e}")
logger.error(
f"Make sure Ollama is running at {self.host} or install it from https://ollama.com"
)
raise ConnectionError(
f"Could not connect to Ollama server. "
f"Make sure Ollama is running at {self.host} or install it from https://ollama.com"
)

def one_shot_mode(self, user_input: str) -> str:
"""
Generate a one-shot response to the user input.

Args:
user_input: The user's input prompt

Returns:
The generated response as a string
"""
self._check_ollama_available()
messages = [
{"role": "system", "content": self.system_prompt},
{"role": "user", "content": user_input}
]

response = self._generate_completion(messages)
return response.strip()

def _generate_completion(self, messages: List[Dict[str, str]]) -> str:
"""
Generate a completion using the Ollama API.

Args:
messages: The conversation history as a list of message dictionaries

Returns:
The generated text response
"""
try:

# Use the configured client so the custom host is respected
response = self._ollama_client.chat(
model=self.model_name,
messages=messages,
options={
"temperature": self.temperature,
"num_predict": self.max_tokens,
}
)

if "message" in response and "content" in response["message"]:
return response["message"]["content"]
else:
raise ValueError(f"Unexpected response format from Ollama API: {response}")

except Exception as e:
raise ConnectionError(f"Error communicating with Ollama: {str(e)}")