TheUnknown550/VSCode-Local-Copilot

Local AI Copilot for VS Code

A VS Code extension that provides GitHub Copilot-like functionality using locally running Ollama models.

Now with smart project-wide context: The extension automatically gathers code and documentation from your workspace for every AI request, making suggestions and edits much more intelligent and Copilot-like.

Features

  • πŸ€– Chat Interface - Interactive AI chat panel similar to GitHub Copilot

  • πŸ’» Code Generation - Generate code from natural language descriptions

  • ✏️ Code Editing - Edit and refactor selected code with AI assistance

  • πŸ“ Code Explanation - Understand complex code with detailed explanations

  • πŸ”„ Diff Preview - See before/after comparisons before applying changes

  • 🎨 Beautiful UI - Modern interface that matches VS Code's theme

  • πŸš€ Multiple Models - Choose between different Ollama models

  • 🧠 Project-Wide Context Awareness - AI suggestions and edits use key files from your workspace (README, SETUP, main, server, extension, .py, .ts, .js, .md).

  • 🔍 Intelligent Suggestions & Edits - The AI can reason about your whole project, not just the current file or cursor.

  • 🛠️ Smart Code Actions - Insert, replace, and preview code with context-aware suggestions and diff previews.

Prerequisites

  1. Ollama - Install from https://ollama.ai
  2. Python 3.8+ - For the backend server
  3. Node.js 16+ - For building the extension

Supported Models

This extension is configured for:

  • gemma3:4b - Fast responses, good for quick tasks
  • gemma3:8b - Better quality, slightly slower
  • deepseek-r1:8b - Reasoning model, best for complex tasks

Installation

Step 1: Install Ollama Models

ollama pull gemma3:4b
ollama pull gemma3:8b
ollama pull deepseek-r1:8b

Step 2: Set Up Python Server

cd server
pip install -r requirements.txt
python server.py

The server will start on http://127.0.0.1:5000

Step 3: Build and Install VS Code Extension

cd vscode-extension
npm install
npm run compile

Then press F5 to launch the extension in a new VS Code window.

Usage

Chat Interface

  1. Open the Command Palette (Ctrl+Shift+P or Cmd+Shift+P)
  2. Type "Local AI Copilot: Chat"
  3. Start chatting with your AI!

Smart Project Context

  • The extension automatically gathers context from up to 5 key files in your workspace (README, SETUP, main, server, extension, .py, .ts, .js, .md) for every AI request.
  • It combines this with the current editor's context and selection.
  • The AI can reason about your whole project, not just the current file or cursor.

This makes code suggestions, edits, and completions much smarter and more Copilot-like!
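The key-file selection described above can be sketched in Python. This is a hedged illustration only: the real logic lives in the extension's TypeScript, and the function name, priority order, and constants here are assumptions, not the project's actual code.

```python
from pathlib import Path

# Base names and extensions treated as "key" context sources (assumed,
# mirroring the list in the feature description above).
KEY_STEMS = ("readme", "setup", "main", "server", "extension")
KEY_SUFFIXES = (".py", ".ts", ".js", ".md")

def pick_context_files(workspace: str, limit: int = 5) -> list:
    """Return up to `limit` files likely to carry project-wide context."""
    candidates = []
    for path in sorted(Path(workspace).rglob("*")):
        if not path.is_file() or path.suffix.lower() not in KEY_SUFFIXES:
            continue
        # Prefer files whose base name matches a known key stem.
        priority = 0 if path.stem.lower() in KEY_STEMS else 1
        candidates.append((priority, path))
    candidates.sort(key=lambda item: item[0])  # stable: keeps path order within a tier
    return [path for _, path in candidates[:limit]]
```

With this ranking, a workspace's README.md and main.py would be selected before generic source files, and anything beyond the limit is dropped.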

Quick Actions

  • Explain Code: Select code β†’ Right-click β†’ "AI: Explain Code"
  • Edit Code: Select code β†’ Right-click β†’ "AI: Edit Code"
  • Generate Code: Place cursor β†’ Right-click β†’ "AI: Generate Code"

Chat Features

  • Text Responses: Regular chat messages appear in the chat window
  • Code Responses: Code suggestions appear with action buttons:
    • Insert at Cursor: Add code at current position
    • Replace Selection: Replace selected code with AI suggestion
    • Copy: Copy code to clipboard

Model Selection

Choose your preferred model from the dropdown in the chat panel:

  • Use smaller models (4b) for faster responses
  • Use larger models (8b) for better quality and complex tasks

How It Works

  1. VS Code Extension: Provides the UI and integrates with VS Code
  2. Python Server: Handles communication with Ollama
  3. Ollama: Runs the AI models locally on your machine
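The server-to-Ollama hop can be illustrated with a minimal request builder. This is a sketch, not the project's actual server code: the function names and the MODELS keys are invented for illustration, though the /api/generate endpoint and its model/prompt/stream fields are part of Ollama's documented REST API.

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default API port

# Mirrors the idea of the MODELS mapping in server/server.py: UI name -> Ollama tag.
MODELS = {"fast": "gemma3:4b", "quality": "gemma3:8b", "reasoning": "deepseek-r1:8b"}

def build_generate_payload(model: str, prompt: str) -> dict:
    """Resolve the UI model name and build the body for Ollama's /api/generate."""
    tag = MODELS.get(model, MODELS["fast"])  # fall back to the fastest model
    return {"model": tag, "prompt": prompt, "stream": False}

def ask_ollama(model: str, prompt: str) -> str:
    """POST the payload to Ollama and return the generated text."""
    body = json.dumps(build_generate_payload(model, prompt)).encode()
    req = request.Request(OLLAMA_URL, data=body,
                         headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With `stream` set to False, Ollama returns one JSON object whose `response` field holds the full completion, which keeps the server-side handling simple.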

Advanced

When you ask for code generation, editing, or explanation, the extension:

  1. Collects relevant code and documentation from your workspace.
  2. Combines it with your current selection or file context.
  3. Sends this rich context to the AI backend for smarter, project-aware results.

VS Code Extension (TypeScript)
       ↓
Python Server (Flask)
       ↓
Ollama API (Port 11434)
       ↓
Local AI Models
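The three steps above amount to stitching several context layers into one prompt before it travels down this pipeline. A hedged Python sketch (the section labels and function name are invented for illustration, not taken from the project):

```python
def assemble_prompt(user_request, selection="", workspace_snippets=None):
    """Stitch workspace context, the current selection, and the user's
    request into a single prompt string for the model."""
    parts = []
    # Workspace files first, so the model sees project-wide context up front.
    for filename, snippet in (workspace_snippets or {}).items():
        parts.append(f"### File: {filename}\n{snippet}")
    if selection:
        parts.append(f"### Selected code\n{selection}")
    parts.append(f"### Request\n{user_request}")
    return "\n\n".join(parts)
```

Ordering the sections this way (project files, then selection, then request) keeps the user's instruction closest to the end of the prompt, where many models weight it most heavily.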

Configuration

Changing Models

Edit server/server.py to add or modify models:

MODELS = {
    "model-name": "ollama-model-name",
    # Add more models here
}

Edit vscode-extension/src/webview.html to update the dropdown:

<option value="model-name">Display Name</option>

Changing Server Port

In server/server.py:

app.run(port=5000, debug=True)  # Change port here

In vscode-extension/src/extension.ts:

const SERVER_URL = 'http://127.0.0.1:5000';  // Update port here
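If you would rather not hard-code the port in two places, one option is to read it from an environment variable on the server side. A sketch only: the COPILOT_PORT variable name is an assumption, not something the project defines.

```python
import os

def resolve_port(default=5000):
    """Pick the server port from COPILOT_PORT, falling back to the default."""
    return int(os.environ.get("COPILOT_PORT", default))

# In server/server.py this would replace the hard-coded value:
# app.run(port=resolve_port(), debug=True)
```

The extension's SERVER_URL would still need to match whatever port the server ends up on.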

Troubleshooting

Server Won't Start

  1. Check if Python dependencies are installed:

    pip install -r server/requirements.txt
  2. Check if port 5000 is available:

    # Windows
    netstat -ano | findstr :5000
    
    # Mac/Linux
    lsof -i :5000

Ollama Connection Issues

  1. Verify Ollama is running:

    ollama list
  2. Check Ollama API:

    curl http://localhost:11434/api/tags
  3. Test model:

    ollama run gemma3:4b "Hello"

Extension Not Working

  1. Check server health:

    curl http://127.0.0.1:5000/health
  2. Check browser console in VS Code:

    • Open Command Palette
    • Type "Developer: Toggle Developer Tools"
    • Check for errors
  3. Rebuild extension:

    cd vscode-extension
    npm run compile

Development

Project Structure

β”œβ”€β”€ server/
β”‚   β”œβ”€β”€ server.py           # Flask server
β”‚   └── requirements.txt    # Python dependencies
β”œβ”€β”€ vscode-extension/
β”‚   β”œβ”€β”€ src/
β”‚   β”‚   β”œβ”€β”€ extension.ts    # Main extension code
β”‚   β”‚   β”œβ”€β”€ webview.html    # Chat UI
β”‚   β”‚   └── webview.js      # Chat logic
β”‚   β”œβ”€β”€ package.json        # Extension manifest
β”‚   └── tsconfig.json       # TypeScript config
└── README.md

Building for Production

cd vscode-extension
npm install -g vsce
vsce package

This creates a .vsix file you can install with:

code --install-extension local-ai-copilot-*.vsix

Contributing

Feel free to submit issues and pull requests!

License

MIT License

Credits

Built with: