A cross-platform desktop AI assistant powered by Ollama, featuring voice I/O, local-first privacy, and an extensible plugin system.
100% FREE & Open Source - MIT License
Optimized for Low-End PCs - Runs smoothly on 2GB RAM systems!
```
React (TypeScript) + Vite
          ↓
    Electron Shell
          ↓
  WebSocket (secure)
          ↓
Python FastAPI Backend
          ↓
   Ollama Local API
```
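The renderer never talks to Ollama directly: the Electron shell opens a token-authenticated WebSocket to the FastAPI backend, which relays prompts to the local Ollama API and streams tokens back. The sketch below illustrates that hop; the `/ws/chat` path, token query parameter, and payload shape are assumptions for the example, not the project's actual API.

```python
# Minimal sketch of the Electron <-> backend <-> Ollama hop (illustrative only).
import os

import httpx
from fastapi import FastAPI, WebSocket, status

app = FastAPI()
OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")
WS_SECRET_TOKEN = os.getenv("WS_SECRET_TOKEN", "")

@app.websocket("/ws/chat")
async def chat(ws: WebSocket) -> None:
    # Reject clients that don't present the shared secret.
    if ws.query_params.get("token") != WS_SECRET_TOKEN:
        await ws.close(code=status.WS_1008_POLICY_VIOLATION)
        return
    await ws.accept()
    prompt = await ws.receive_text()
    # Stream NDJSON chunks from the local Ollama API back over the socket.
    async with httpx.AsyncClient(base_url=OLLAMA_BASE_URL, timeout=None) as client:
        async with client.stream(
            "POST", "/api/generate",
            json={"model": "llama2", "prompt": prompt},
        ) as resp:
            async for line in resp.aiter_lines():
                await ws.send_text(line)
    await ws.close()
```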
- Ollama Integration: Full model management, streaming responses, context window control
- Voice I/O: Web Speech API + optional offline STT (Whisper/VOSK), TTS with multiple voices
- Security First: Local-only by default, encrypted storage, sandboxed execution, audit logging, Content Security Policy (CSP)
- Polished UI: Light/dark themes, blurred glass-morphism backgrounds, accessibility (ARIA, screen readers), keyboard shortcuts
- Plugin System: User-defined commands with permission controls
- Cross-Platform: Windows, macOS, Linux installers
- Performance Optimized: Automatic low-end device detection, reduced animations, memory management
This project is specifically optimized for older and low-end hardware:
- Minimum: 2GB RAM, Dual-core CPU
- Recommended: 4GB RAM, Quad-core CPU
- Auto-detects device capabilities and adjusts performance
- Smaller AI models (1B parameters) for faster responses
- Reduced visual effects on weak hardware
- Memory management and cleanup optimizations
See LOW_END_PC_OPTIMIZATION.md for a detailed optimization guide.
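The capability detection is roughly of this shape; a minimal sketch assuming psutil, with illustrative thresholds and model names rather than the app's exact policy:

```python
# Illustrative device probe: pick a lighter model and disable heavy
# effects when RAM or core count is low (thresholds are assumptions).
import os

import psutil

def pick_profile() -> dict:
    ram_gb = psutil.virtual_memory().total / 2**30
    cores = os.cpu_count() or 1
    low_end = ram_gb < 4 or cores <= 2
    return {
        "model": "tinyllama" if low_end else "llama2:7b",  # ~1B params on low-end
        "animations": not low_end,
    }
```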
```
# Windows - Double-click or run:
Zeno_AI.bat

# Or use PowerShell:
.\start_zeno.ps1

# Or use npm:
npm run start:all
```

```
# 1. Install dependencies
npm run setup:complete

# 2. Start all services
npm run start:all
```

```
# Start services individually
npm run start:ollama     # Start Ollama
npm run start:backend    # Start Python backend
npm run start:frontend   # Start React frontend

# Health checks
npm run health:check     # Check all services

# Stop everything
npm run stop:all         # Stop all services
```

```
# Quick setup for low-end systems
setup_low_end.bat
```
- Ollama (required): Install Ollama. After installation, pull a model: `ollama pull llama2`
- Node.js 18+: Download
- Python 3.10+: Download
- Clone and install dependencies:

  ```
  git clone <repo-url>
  cd jarvis
  npm install
  ```

- Set up the Python backend:

  ```
  cd backend
  python -m venv venv
  venv\Scripts\activate        # Windows
  source venv/bin/activate     # macOS/Linux
  pip install -r requirements.txt
  ```

- Configure the environment:

  ```
  # Copy example configs
  cp backend/.env.example backend/.env
  cp frontend/.env.example frontend/.env
  ```

- Run in development mode:

  ```
  # Terminal 1 - Backend
  cd backend
  venv\Scripts\activate   # or: source venv/bin/activate
  python main.py

  # Terminal 2 - Frontend + Electron
  npm run dev
  ```
```
# Build all platforms (requires platform-specific tools)
npm run build:all

# Build for current platform only
npm run build

# Outputs in /dist folder
```

```
jarvis/
├── frontend/              # React + Vite + TypeScript
│   ├── src/
│   │   ├── components/    # UI components
│   │   ├── services/      # Backend communication
│   │   ├── hooks/         # React hooks
│   │   └── types/         # TypeScript definitions
│   └── vite.config.ts
├── backend/               # Python FastAPI
│   ├── api/               # API routes
│   ├── services/          # Business logic (Ollama, STT, TTS)
│   ├── security/          # Auth, encryption, sandboxing
│   ├── plugins/           # Plugin system
│   └── main.py
├── electron/              # Electron main process
│   ├── main.js            # App lifecycle
│   ├── preload.js         # Secure IPC bridge
│   └── tray.js            # System tray
├── scripts/               # Build and packaging scripts
├── tests/                 # Integration tests
└── docs/                  # Additional documentation
```
```
# Ollama
OLLAMA_BASE_URL=http://localhost:11434
DEFAULT_MODEL=llama2
MAX_CONTEXT_TOKENS=4096

# Server
BACKEND_HOST=127.0.0.1
BACKEND_PORT=8765
WS_SECRET_TOKEN=<auto-generated>

# Security
ENABLE_ENCRYPTION=true
AUDIT_LOG_ENABLED=true
REQUIRE_ACTION_CONFIRMATION=true

# STT/TTS
STT_ENGINE=web    # web, whisper, vosk
TTS_ENGINE=web    # web, coqui, pyttsx3
WAKE_WORD_ENABLED=false
```
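One way the backend might consume these variables is via pydantic-settings; the class below mirrors the sample `.env` but is a hedged sketch, not confirmed project code:

```python
# Sketch: typed settings loaded from backend/.env (field names map to the
# environment variables above case-insensitively).
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env")

    ollama_base_url: str = "http://localhost:11434"
    default_model: str = "llama2"
    max_context_tokens: int = 4096
    backend_host: str = "127.0.0.1"
    backend_port: int = 8765
    ws_secret_token: str = ""
    enable_encryption: bool = True

settings = Settings()  # values in .env override these defaults
```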
Recommended models by use case:
- Fast responses: `llama2:7b`, `mistral:7b`
- Better quality: `llama2:13b`, `mixtral:8x7b`
- Code assistance: `codellama:13b`, `deepseek-coder:6.7b`
- Works out of the box in Chromium-based Electron
- Requires an internet connection in some browsers
- Supports push-to-talk and continuous listening modes
Whisper (Recommended):

```
cd backend
pip install openai-whisper
# Downloads the model on first use (~140 MB for the base model)
```
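Once installed, transcription takes only a few lines with the openai-whisper API; the file name and model size here are illustrative:

```python
# Offline transcription sketch using the openai-whisper package.
import whisper

model = whisper.load_model("base")          # downloaded on first use
result = model.transcribe("recording.wav")  # language is auto-detected
print(result["text"])
```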
VOSK (Lightweight):

```
pip install vosk
# Download a model: https://alphacephei.com/vosk/models
# Extract to backend/models/vosk/
```

Porcupine:
```
pip install pvporcupine
# Free tier: 1 wake word, requires an API key
# Set PORCUPINE_ACCESS_KEY in .env
```

- Local-first: All data stays on your machine by default
- No telemetry: Zero analytics or tracking
- Encrypted storage: Optional AES-256-GCM encryption for chat history
- Sandboxed execution: User scripts run in restricted environment
- Audit logging: All actions logged with timestamps
- Chat history: `~/.jarvis/history.db` (SQLite)
- Audit logs: `~/.jarvis/logs/`
- Encrypted with a password-derived key (PBKDF2 + AES-GCM)
All system actions require explicit user confirmation:
- Shell command execution
- File system access
- Network requests
- Application launching
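A minimal sketch of such a confirmation gate for shell commands; the `confirm` callback (which would round-trip to the UI) and the function name are assumptions for illustration:

```python
# Sketch: every privileged action awaits explicit user approval first.
import asyncio
from typing import Awaitable, Callable

async def run_shell(command: str,
                    confirm: Callable[[str], Awaitable[bool]]) -> str:
    if not await confirm(f"Run shell command: {command!r}?"):
        return "Cancelled by user."
    proc = await asyncio.create_subprocess_shell(
        command,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.STDOUT,
    )
    out, _ = await proc.communicate()
    return out.decode(errors="replace")
```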
```
# Frontend tests
cd frontend
npm test

# Backend tests
cd backend
pytest

# Integration tests
npm run test:integration

# E2E tests
npm run test:e2e
```

```
npm run build:win
# Outputs: dist/JARVIS-Setup-1.0.0.exe (NSIS)
```

```
npm run build:mac
# Outputs: dist/JARVIS-1.0.0.dmg
# Note: Requires code signing for notarization
```

```
npm run build:linux
# Outputs: dist/JARVIS-1.0.0.AppImage, .deb, .rpm
```

Create custom commands in `~/.jarvis/plugins/`:
```python
# example_plugin.py
from jarvis.plugin import Plugin, command

class WeatherPlugin(Plugin):
    @command(name="weather", description="Get weather info")
    async def get_weather(self, location: str):
        # Your implementation
        return f"Weather in {location}: Sunny"
```

Register in the settings UI or in `plugins.json`.
- `Ctrl/Cmd + Shift + J`: Toggle main window
- `Ctrl/Cmd + K`: Focus chat input
- `Ctrl/Cmd + ,`: Open settings
- `Space` (hold): Push-to-talk
- `Esc`: Stop generation
```
# Check if Ollama is running
ollama list

# Start Ollama service
ollama serve
```

```
# Check port availability
netstat -an | findstr 8765   # Windows
lsof -i :8765                # macOS/Linux

# Verify Python dependencies
pip install -r backend/requirements.txt --upgrade
```

```
# Clear cache
npm run clean
rm -rf node_modules
npm install
```

- Architecture Deep Dive
- Security Whitepaper
- Plugin Development Guide
- API Reference
- Contributing Guidelines
MIT License - See LICENSE file
- Ollama: MIT License
- FastAPI: MIT License
- React: MIT License
- Electron: MIT License
- Whisper (optional): MIT License
- VOSK (optional): Apache 2.0
- Porcupine (optional): Proprietary (free tier available)
Contributions welcome! Please read CONTRIBUTING.md first.
This is a local-first AI assistant. While we prioritize security and privacy, users are responsible for:
- Securing their encryption passwords
- Reviewing plugin code before installation
- Understanding model capabilities and limitations
- Complying with Ollama and model licenses