Run Hugging Face models locally in VS Code! Code with AI assistance completely offline using a local runtime.
NoaxAI brings the power of large language models directly into your editor - no cloud services required. Download models once, then work completely offline with AI-powered coding assistance.
- 🔒 Privacy First: All AI operations run locally on your machine
- 💨 Fast & Efficient: Uses optimized models for quick responses
- 🌐 Offline Ready: No internet needed after initial model download
- 🎯 Code Focused: Built specifically for programming workflows
- 💰 Free Forever: No subscriptions or API costs
- Explore and manage AI models through the "Explore Models" panel
- Search models by name, size, or description
- Models are sorted by size for easy selection
- Visual indicators for installed and selected models
- Real-time download progress with progress bar
- Models range from 4K to 3B parameters
- Switch between different models instantly
- Interactive chat interface with AI assistant
- Multi-file context support - attach multiple files to your questions
- Stream responses in real-time
- Export generated code directly to files
- Quick action suggestions and code examples
- Organized welcome screen with common tasks
- Stop generation at any time
- Visual file attachment system
- Rich markdown and code block support
- Refactor Code: Improve code quality and efficiency
- Generate Unit Tests: Automatically create tests for your code
- Fix Bugs: Get suggestions for bug fixes
- Document Code: Generate comprehensive documentation
- Custom Operations: Use inline editing for custom code transformations
- Right-Click Menu: Quick access to NoaxAI features
- Context-Aware: Language-specific suggestions
- Generate comprehensive documentation for entire folders
- Create user guides and technical documentation
- Generate ASCII diagrams for code structure
- Export to markdown format
- Support for mkdocs configuration
- Maintains folder structure in output
- Customizable site name and description
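Since the generated documentation supports mkdocs, the output can be served as a static site. A minimal sketch of the kind of `mkdocs.yml` involved (the exact keys NoaxAI writes are not documented here; this is a generic example using the customizable site name and description):

```yaml
# Illustrative mkdocs.yml — NoaxAI's actual output may differ
site_name: My Project Docs             # the customizable site name
site_description: Generated user guide # the customizable description
docs_dir: docs                         # generated markdown, mirroring your folder structure
nav:
  - Home: index.md
```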
- AI-powered command-line helper
- Get suggestions for terminal commands
- Execute suggested commands directly
- Interactive command selection
- Real-time command output display
- Multiple command suggestions
- Install the NoaxAI extension from the VS Code Marketplace
- Open VS Code
- Open the "Explore Models" panel in the sidebar
- Browse or search for a model that fits your needs
- Click on a model to install it locally
- Wait for the model to download (progress bar will show status)
- The model will be automatically selected after installation
- Open the "Explore Models" panel
- Use the search icon to find models by name, size, or description
- Click on the cloud icon to install a model
- Installed models show a checkmark
- Selected model shows a "Selected ✓" indicator
- Models are sorted by size for easy selection
- Click the chat icon in the status bar or use command palette
- Use the attach-file button to add relevant workspace files
- Toggle "Export" to save generated code to files
- Real-time streaming responses
- Stop generation at any time
- Organized suggestions for common tasks
- Visual file context display
Select code and right-click to access:
- `NoaxAI: Refactor Code`
- `NoaxAI: Generate Unit Test`
- `NoaxAI: Fix Bug`
- `NoaxAI: Edit Inline`
- Right-click on a folder in the explorer
- Select `NoaxAI: Document This Folder`
- Enter the site name and description
- Documentation will be generated in markdown format
- Open command palette
- Type `NoaxAI: REPL`
- Ask for command suggestions
- Select and execute suggested commands
`Ctrl+N`: Apply the last AI suggestion when text is selected
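If `Ctrl+N` clashes with another binding, it can be remapped in VS Code's `keybindings.json` (which allows comments). The command ID below is a placeholder — look up the real one in the extension's Feature Contributions tab:

```json
[
  {
    "key": "ctrl+alt+n",
    // Hypothetical command ID — check the extension's contributions for the actual name
    "command": "NoaxAI.applyLastSuggestion",
    "when": "editorHasSelection"
  }
]
```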
- VS Code 1.94.0 or later
- Internet connection for initial model download
- Sufficient disk space for local models
This extension contributes the following settings:
- `NoaxAI.cacheDirPath`: Path to store downloaded models
- `NoaxAI.selectedModel`: Currently selected model for operations
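For example, in `settings.json` (the values shown are illustrative; the model name is taken from the comparison table below):

```json
{
  "NoaxAI.cacheDirPath": "~/.noaxai/models",
  "NoaxAI.selectedModel": "SmolLM2-135M"
}
```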
- Initial model download may take time depending on your internet connection
- Model switching may have a brief delay
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
- Models provided by Hugging Face
- Runtime for model execution
- Various open-source contributors
| Model | Size | Speed | Memory Usage |
|---|---|---|---|
| SmolLM2-135M | 135MB | Fast | ~300MB RAM |
| Llama-3.2-1B | 1.1GB | Standard | ~2GB RAM |
Need a specific model? We're constantly expanding our model support!
- Check if the model is compatible
- Create a Model Request on GitHub
- Provide the following information:
- Model name and Hugging Face link
- Use case/scenario
- Expected benefits
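A model request issue covering those points might look like this (every value below is a placeholder, not a real request):

```markdown
**Model name:** <model name>
**Hugging Face link:** https://huggingface.co/<org>/<model>
**Use case/scenario:** <e.g. inline completion on a low-memory laptop>
**Expected benefits:** <e.g. faster responses, better code quality>
```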
Current model priorities:
- 🚀 Small & fast models (< 500MB)
- 💻 Code-specialized models
- 🔧 Task-specific models
- Star the repo to show support! ⭐
