Commit ea1822b

Merge branch 'master' into settings
2 parents 2b1fecd + 04dd753
File tree: 18 files changed, +1589 −211 lines changed

OLLAMA_SETUP.md

Lines changed: 244 additions & 0 deletions
# Ollama Integration with Remix IDE

This guide explains how to set up and use Ollama with Remix IDE for local AI-powered code completion and assistance. Note the restrictions listed below.
## Table of Contents

- [What is Ollama?](#what-is-ollama)
- [Installation](#installation)
- [CORS Configuration](#cors-configuration)
- [Model Download and Management](#model-download-and-management)
- [Recommended Models](#recommended-models)
- [Using Ollama in Remix IDE](#using-ollama-in-remix-ide)
- [Troubleshooting](#troubleshooting)
- [Getting Help](#getting-help)
## What is Ollama?

Ollama is a local AI model runner that allows you to run large language models on your own machine. With Remix IDE's Ollama integration, you get:

- **Privacy**: All processing happens locally on your machine
- **No API rate throttling**: No usage fees or rate limits
- **Offline capability**: Works without an internet connection
- **Code-optimized models**: Specialized models for coding tasks
- **Fill-in-Middle (FIM) support**: Advanced code completion capabilities
## Models Compatible with Remix IDE

The following models are compatible with Remix IDE (both desktop and web). They have been tested to give acceptable results on mid-tier consumer GPUs. Since you operate Ollama yourself, make sure you understand each model's performance characteristics and your hardware's capabilities.

- **codestral:latest**
- **qwen3-coder:latest**
- **gpt-oss:latest**
- **deepseek-coder-v2:latest** (great for code completion)

## Restrictions

The current integration does not support agentic workflows, and we strongly recommend running Ollama with hardware acceleration (e.g. a GPU) for the best experience. The following features are not available when using Ollama; please fall back to a remote provider for them:

- **Contract generation**
- **Workspace edits**
## Installation

### Step 1: Install Ollama

**macOS:**
Download the installer from [ollama.ai](https://ollama.ai/download/mac), or install via Homebrew:
```bash
brew install ollama
```

**Windows:**
Download the installer from [ollama.ai](https://ollama.ai/download/windows)

**Linux:**
```bash
curl -fsSL https://ollama.ai/install.sh | sh
```

### Step 2: Start Ollama Service

After installation, start the Ollama service:

```bash
ollama serve
```

The service will run on `http://localhost:11434` by default.
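To confirm the endpoint responds before going further, you can use a small check like the following (a sketch of ours, not part of Ollama; it assumes `curl` is installed):

```bash
# Return success if the Ollama HTTP API answers within 2 seconds.
# The argument overrides the default host:port if given.
ollama_health() {
  curl -sf --max-time 2 "http://${1:-localhost:11434}/api/tags" > /dev/null
}

if ollama_health; then
  echo "Ollama is up"
else
  echo "Ollama is not reachable on localhost:11434"
fi
```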
## CORS Configuration

To allow Remix IDE to communicate with Ollama, you need to configure CORS settings so that the Remix origin is permitted to call the local Ollama API. See [Ollama CORS Settings](https://objectgraph.com/blog/ollama-cors/) for platform-specific instructions.
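As a concrete example, one common approach is to set `OLLAMA_ORIGINS` before starting the server manually (a sketch; a systemd- or app-managed installation needs the variable set in its own service environment instead):

```bash
# Allow the Remix web IDE origin to call the local Ollama API.
# Use "*" instead to allow all origins (less strict, fine for local experiments).
export OLLAMA_ORIGINS="https://remix.ethereum.org"
ollama serve
```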
## Model Download and Management

### Downloading Models

Use the `ollama pull` command to download models:

```bash
# Download a specific model
ollama pull qwen2.5-coder:14b

# Download the latest version
ollama pull codestral:latest
```

### Managing Models

```bash
# List installed models
ollama list

# Remove a model
ollama rm model-name

# Show model information
ollama show codestral:latest

# Show only the model's prompt template
ollama show codestral:latest --template

# Update a model
ollama pull codestral:latest
```
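The commands above compose; for example, this hypothetical one-liner re-pulls every installed model to bring them all up to date (it assumes `ollama list` prints a header row followed by one model name per line in the first column):

```bash
# Re-pull every installed model; `tail -n +2` skips the header row.
ollama list | tail -n +2 | awk '{print $1}' | while read -r model; do
  ollama pull "$model"
done
```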
### Model Storage Locations

Models are stored locally in:
- **macOS:** `~/.ollama/models`
- **Linux:** `~/.ollama/models`
- **Windows:** `%USERPROFILE%\.ollama\models`
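The per-platform paths above can be resolved with a small helper (our own sketch, not an Ollama command; it also honors the `OLLAMA_MODELS` environment variable, which overrides the default location when set):

```bash
# Print the Ollama model directory for the current platform.
# OLLAMA_MODELS, if set, takes precedence over the default.
ollama_model_dir() {
  if [ -n "${OLLAMA_MODELS:-}" ]; then
    echo "$OLLAMA_MODELS"
    return
  fi
  case "$(uname -s)" in
    MINGW*|MSYS*|CYGWIN*) echo "${USERPROFILE}/.ollama/models" ;;
    *)                    echo "${HOME}/.ollama/models" ;;  # macOS and Linux
  esac
}

ollama_model_dir
```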
## Recommended Models

### For Code Completion (Fill-in-Middle Support)

These models support advanced code completion with context awareness, code explanation, debugging help, and general questions:

#### **Codestral (Excellent for Code)**
```bash
ollama pull codestral:latest  # ~22GB, state-of-the-art code model
```

#### **Qwen Coder**
```bash
ollama pull qwen3-coder:latest
```

#### **GPT-OSS**
```bash
ollama pull gpt-oss:latest
```

#### **Code Gemma**
```bash
ollama pull codegemma:7b  # ~5GB, Google's code model
ollama pull codegemma:2b  # ~2GB, lightweight option
```
### Model Size and Performance Guide

| Model Size | RAM Required | Speed  | Quality | Use Case |
|------------|--------------|--------|---------|----------|
| 2B-3B      | 4GB+         | Fast   | Good    | Quick completions, low-end hardware |
| 7B-8B      | 8GB+         | Medium | High    | **Recommended for most users** |
| 13B-15B    | 16GB+        | Slower | Higher  | Development workstations |
| 30B+       | 32GB+        | Slow   | Highest | High-end workstations only |
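The table reads as a simple decision rule; here is a hypothetical helper (not part of any tool) that suggests a size tier from the RAM you can spare:

```bash
# Map available RAM (in GB) to the model-size tier from the table above.
suggest_model_size() {
  if   [ "$1" -ge 32 ]; then echo "30B+"
  elif [ "$1" -ge 16 ]; then echo "13B-15B"
  elif [ "$1" -ge 8 ];  then echo "7B-8B"
  else                       echo "2B-3B"
  fi
}

suggest_model_size 16   # prints 13B-15B
```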
## Using Ollama in Remix IDE

### Step 1: Verify Ollama is Running

Ensure Ollama is running and accessible:
```bash
curl http://localhost:11434/api/tags
```

### Step 2: Select Ollama in Remix IDE

1. Open Remix IDE
2. Navigate to the AI Assistant panel
3. Click the provider selector (it shows the current provider, e.g. "MistralAI")
4. Select "Ollama" from the dropdown
5. Wait for the connection to establish

### Step 3: Choose Your Model

1. After selecting Ollama, a model dropdown will appear
2. Select your preferred model from the list
3. The selection will be saved for future sessions

### Step 4: Start Using AI Features

- **Code Completion**: Type code and get intelligent completions
- **Code Explanation**: Ask questions about your code
- **Error Help**: Get assistance with debugging
- **Code Generation**: Generate code from natural language descriptions
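If completions behave oddly inside the IDE, you can exercise a model directly against Ollama's `/api/generate` endpoint from a terminal. A hedged sketch (the model name and prompt are examples; `"stream": false` returns one JSON object instead of a stream):

```bash
# Non-streaming generation request; prints a single JSON response,
# or a fallback message when the server is down.
payload='{"model": "codestral:latest", "prompt": "// solidity: add two uints", "stream": false}'
curl -s http://localhost:11434/api/generate -d "$payload" \
  || echo "Ollama is not reachable on localhost:11434"
```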
## Troubleshooting

### Common Issues

#### **"Ollama is not available" Error**

1. Check if Ollama is running:
```bash
curl http://localhost:11434/api/tags
```

2. Verify the CORS configuration:
```bash
curl -H "Origin: https://remix.ethereum.org" http://localhost:11434/api/tags
```

3. Check if models are installed:
```bash
ollama list
```

#### **No Models Available**

Download at least one model:
```bash
ollama pull codestral:latest
```

#### **Connection Refused**

1. Start the Ollama service:
```bash
ollama serve
```

2. Check that it is listening on the correct port:
```bash
netstat -an | grep 11434
```

#### **Model Loading Slow**

- Close other applications to free up RAM
- Use smaller models (7B instead of 13B+)
- Ensure sufficient disk space

#### **CORS Errors in Browser Console**

1. Verify `OLLAMA_ORIGINS` is set correctly
2. Restart Ollama after changing CORS settings
3. Clear the browser cache and reload Remix IDE
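The individual checks above can be rolled into a one-shot diagnostic script (our own sketch; `check` is a hypothetical helper, and the script assumes `curl` and `ollama` are on the PATH):

```bash
# Run a named check and report OK/FAIL without aborting the script.
check() {
  label="$1"; shift
  if "$@" > /dev/null 2>&1; then
    echo "OK   $label"
  else
    echo "FAIL $label"
  fi
}

check "Ollama API reachable"            curl -sf http://localhost:11434/api/tags
check "CORS allows remix.ethereum.org"  curl -sf -H "Origin: https://remix.ethereum.org" http://localhost:11434/api/tags
check "At least one model installed"    sh -c "ollama list | tail -n +2 | grep -q ."
```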
### Performance Optimization

#### **Hardware Recommendations**

- **Minimum**: 8GB RAM, integrated GPU
- **Recommended**: 16GB RAM, dedicated GPU with 8GB+ VRAM
- **Optimal**: 32GB RAM, RTX 4090 or similar
## Getting Help

- **Ollama Documentation**: [https://ollama.ai/docs](https://ollama.ai/docs)
- **Remix IDE Documentation**: [https://remix-ide.readthedocs.io](https://remix-ide.readthedocs.io)
- **Community Support**: Remix IDE Discord/GitHub Issues
- **Model Hub**: [https://ollama.ai/library](https://ollama.ai/library)

---

**Note**: This integration provides local AI capabilities for enhanced privacy and performance. Model quality and speed depend on your hardware specifications and chosen models.
