Commit 749261f

removed dead code from openai_cl, README updated
1 parent: 713cdc4 · commit: 749261f

File tree: 3 files changed (+87, -80 lines)


README.md

Lines changed: 87 additions & 39 deletions
@@ -1,79 +1,127 @@
 ![Demo](./demo.png)

-# ⚡ CodeRunner: A sandbox running on apple container for code execution
-
-CodeRunner provides a MCP server running inside a secure sandbox, powered by Apple's native [container](https://github.com/apple/container) technology, allowing you to safely execute code generated by local/remote AI models like Ollama.
-
-This guide is for **using** the pre-built CodeRunner sandbox.
+# ⚡ CodeRunner: Secure Code Execution Sandbox

+CodeRunner provides a secure MCP (Model Context Protocol) server that runs inside a sandboxed environment, powered by Apple's native [container](https://github.com/apple/container) technology. It allows you to safely execute code generated by AI models like Claude, OpenAI GPT, or Ollama.

+This guide shows you how to use the pre-built CodeRunner sandbox.

 ## 🚀 Quick Start

 ### Prerequisites

-* A Mac with Apple Silicon (M1/M2/M3/M4 series).
-* **[Apple `container` Tool](https://github.com/apple/container)** installed via **[Download](https://github.com/apple/container/releases/download/0.1.0/container-0.1.0-installer-signed.pkg)**
-* **Python 3.10+**
+- Mac with Apple Silicon (M1/M2/M3/M4 series)
+- **[Apple `container` Tool](https://github.com/apple/container)** - [Download installer](https://github.com/apple/container/releases/download/0.1.0/container-0.1.0-installer-signed.pkg)
+- **Python 3.10+**

-### Step 1: Configure Local Network for Sandbox
+### Step 1: Set Up Local Network

-
-Run these commands once to set up the `.local` top-level domain:
+Run these commands once to configure the `.local` domain:

 ```bash
 sudo container system dns create local
 container system dns default set local
 ```

-### Step 2: Run the Pre-Built Sandbox Container
+### Step 2: Start the Sandbox Container

-This single command will download the CodeRunner sandbox image from Docker Hub (if not already present) and run it.
+This command downloads and runs the CodeRunner sandbox from Docker Hub:

 ```bash
-# This will run the container named 'coderunner' and make it
-# available at http://coderunner.local:8222
+# Run the container and make it available at http://coderunner.local:8222
 container run \
 --name coderunner \
 --detach --rm \
 instavm/coderunner
 ```

-
-### Step 3: Run an AI Task
-
-Finally, run the script from your terminal:
+### Step 3: Clone the Repository

 ```bash
 git clone https://github.com/BandarLabs/coderunner.git
 cd coderunner
-
-# Configure Claude Desktop MCP integration (optional)
-cp claude_mcp_proxy/claude_desktop_config.example.json claude_mcp_proxy/claude_desktop_config.json
-# Edit the config file with your local paths
 ```

-Now you can give it prompts like `write python code to generate 100 primes` and watch it execute the code safely in the sandbox!
+## 🔌 Integration Options

-### Use with Ollama3.1 with MCPHOST
+### Option 1: Claude Desktop Integration

-Download `mcphost` from [releases](https://github.com/mark3labs/mcphost/releases/tag/v0.14.0)
+Configure Claude Desktop to use CodeRunner as an MCP server:

-```bash
-cp examples/.mcp.json ~/.mcp.json
-~/Downloads/mcphost_Darwin_arm64/mcphost -m ollama:llama3.1:8b
-```
+1. **Copy the example configuration:**
+   ```bash
+   cp claude_mcp_proxy/claude_desktop_config.example.json claude_mcp_proxy/claude_desktop_config.json
+   ```

-### Can also run via python openai agents
+2. **Edit the configuration file** and replace the placeholder paths:
+   - Replace `/path/to/your/python` with your actual Python path (e.g., `/usr/bin/python3` or `/opt/homebrew/bin/python3`)
+   - Replace `/path/to/coderunner` with the actual path to your cloned repository
+
+   Example after editing:
+   ```json
+   {
+     "mcpServers": {
+       "coderunner": {
+         "command": "/opt/homebrew/bin/python3",
+         "args": ["/Users/yourname/coderunner/claude_mcp_proxy/mcp.py"]
+       }
+     }
+   }
+   ```

-```bash
-python examples/openai_client.py
-```
+3. **Update Claude Desktop configuration:**
+   - Open Claude Desktop
+   - Go to Settings → Developer
+   - Add the MCP server configuration
+   - Restart Claude Desktop

-### Use via Curl
+4. **Start using CodeRunner in Claude:**
+   You can now ask Claude to execute code, and it will run safely in the sandbox!

-```
-curl -X POST "http://coderunner.local:8222/execute/" -H "Content-Type: application/json" -d '{"command": "print(100**100)"}'
+### Option 2: Python OpenAI Agents

-{"result":"100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\n"}
-```
+Use CodeRunner with OpenAI's Python agents library:
+
+1. **Set your OpenAI API key:**
+   ```bash
+   export OPENAI_API_KEY="your-openai-api-key-here"
+   ```
+
+2. **Install required dependencies:**
+   ```bash
+   pip install openai-agents
+   ```
+
+3. **Run the client:**
+   ```bash
+   python examples/openai_client.py
+   ```
+
+4. **Start coding:**
+   Enter prompts like "write python code to generate 100 prime numbers" and watch it execute safely in the sandbox!
+
+## 🛡️ Security Features
+
+### Sandboxed Execution
+All code runs in an isolated container environment, ensuring your host system remains secure.
+
+## 🏗️ Architecture
+
+CodeRunner consists of:
+- **Sandbox Container:** Isolated execution environment with Jupyter kernel
+- **MCP Server:** Handles communication between AI models and the sandbox
+- **Proxy Layer:** Manages connections and security
+
+## 📚 Examples
+
+The `examples/` directory contains:
+- `openai_client.py` - Example OpenAI agents integration
+- `systemprompt.txt` - Sample system prompt for AI models
+
+## 🤝 Contributing
+
+We welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
+
+## 📄 License
+
+This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
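
The rewritten Quick Start drops the old "Use via Curl" section, but the removed lines above still document the sandbox's raw HTTP endpoint. For reference, a minimal Python sketch of the same call; the endpoint path, request payload, and response shape come from the removed curl example, while the helper name, the `requests` dependency, and the timeout are illustrative assumptions:

```python
# Minimal sketch of the raw /execute/ call shown in the removed "Use via Curl"
# section. Endpoint and payload shape come from that example; the helper name,
# the third-party `requests` dependency, and the timeout are assumptions.
import requests

SANDBOX_URL = "http://coderunner.local:8222/execute/"


def execute_in_sandbox(command: str, timeout: float = 30.0) -> str:
    """POST a code snippet to the CodeRunner sandbox and return its output."""
    response = requests.post(SANDBOX_URL, json={"command": command}, timeout=timeout)
    response.raise_for_status()
    # The removed example shows the response as {"result": "..."}.
    return response.json()["result"]


if __name__ == "__main__":
    # Same snippet the old curl example sent.
    print(execute_in_sandbox("print(100**100)"))
```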

examples/.mcp.json

Lines changed: 0 additions & 15 deletions
This file was deleted.

examples/openai_client.py

Lines changed: 0 additions & 26 deletions
@@ -11,32 +11,6 @@
 from agents.model_settings import ModelSettings


-# async def run(mcp_server: MCPServer):
-#     agent = Agent(
-#         name="Assistant",
-#         instructions="Use the tools to answer the questions.",
-#         mcp_servers=[mcp_server],
-#         model_settings=ModelSettings(tool_choice="required"),
-#     )
-
-#     # Use the `add` tool to add two numbers
-#     message = "list files in current directory using python"
-#     print(f"Running: {message}")
-#     result = await Runner.run(starting_agent=agent, input=message)
-#     print(result.final_output)
-
-#     # Run the `get_weather` tool
-#     message = "Fetch ETH price on 15th June 2025. First pip install libraries needed like yfinance, then write the code and fetch data."
-#     print(f"\n\nRunning: {message}")
-#     result = await Runner.run(starting_agent=agent, input=message)
-#     print(result.final_output)
-
-#     # Run the `get_secret_word` tool
-#     message = "What's the secret word?"
-#     print(f"\n\nRunning: {message}")
-#     result = await Runner.run(starting_agent=agent, input=message)
-#     print(result.final_output)
-

 async def run(mcp_server: MCPServer):
     agent = Agent(
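
The lines removed here were a commented-out duplicate of the pattern that the surviving `run` coroutine implements. For context, a minimal sketch of that pattern reassembled from the deleted comments; the `Agent`, `Runner`, and `ModelSettings` usage is taken from them, while the `from agents import ...` and `from agents.mcp import MCPServer` import paths and the type hints are assumptions about the openai-agents package this example targets:

```python
# Sketch reassembled from the commented-out code deleted in this commit.
# Import paths are assumptions based on the openai-agents package; the
# Agent/Runner/ModelSettings calls mirror the removed comments.
from agents import Agent, Runner
from agents.mcp import MCPServer
from agents.model_settings import ModelSettings


async def run(mcp_server: MCPServer) -> None:
    # Force the model to answer through the CodeRunner MCP tools.
    agent = Agent(
        name="Assistant",
        instructions="Use the tools to answer the questions.",
        mcp_servers=[mcp_server],
        model_settings=ModelSettings(tool_choice="required"),
    )

    # First prompt from the removed example block.
    message = "list files in current directory using python"
    print(f"Running: {message}")
    result = await Runner.run(starting_agent=agent, input=message)
    print(result.final_output)

# examples/openai_client.py constructs the MCP server connection itself and
# passes it into run(); that wiring is outside the lines shown in this diff.
```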
