Zeno - Local AI Desktop Assistant

A cross-platform desktop AI assistant powered by Ollama, featuring voice I/O, local-first privacy, and an extensible plugin system.

🎉 100% FREE & Open Source - MIT License

⚡ Optimized for Low-End PCs - Runs smoothly on 2GB RAM systems!

🏗️ Architecture

React (TypeScript) + Vite
         ↓
    Electron Shell
         ↓
   WebSocket (secure)
         ↓
  Python FastAPI Backend
         ↓
    Ollama Local API
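
To make the hops concrete, here is a minimal sketch of the WebSocket layer between the Electron shell and the FastAPI backend; the endpoint path, token check, and message shape are illustrative assumptions, not Zeno's exact API:

# sketch only; the real routes and validation live in backend/api/
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()

@app.websocket("/ws")
async def chat_socket(ws: WebSocket):
    # assumption: the backend checks WS_SECRET_TOKEN (from backend/.env) via a query param
    if ws.query_params.get("token") != "expected-secret":
        await ws.close(code=4401)
        return
    await ws.accept()
    try:
        while True:
            prompt = await ws.receive_text()       # message from the React UI
            # in Zeno this would be forwarded to the Ollama local API
            await ws.send_text(f"echo: {prompt}")
    except WebSocketDisconnect:
        pass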

✨ Features

  • 🤖 Ollama Integration: Full model management, streaming responses, context window control (see the streaming sketch after this list)
  • 🎤 Voice I/O: Web Speech API + optional offline STT (Whisper/VOSK), TTS with multiple voices
  • 🔐 Security First: Local-only by default, encrypted storage, sandboxed execution, audit logging, Content Security Policy (CSP)
  • 🎨 Polished UI: Light/dark themes, blurred glass morphism backgrounds, accessibility (ARIA, screen readers), keyboard shortcuts
  • 🔌 Plugin System: User-defined commands with permission controls
  • 📦 Cross-Platform: Windows, macOS, Linux installers
  • ⚡ Performance Optimized: Automatic low-end device detection, reduced animations, memory management
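
The streaming sketch referenced above, written against Ollama's documented /api/generate endpoint (the model name and prompt are placeholders):

import json
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2", "prompt": "Hello!", "stream": True},
    stream=True,
)
for line in resp.iter_lines():
    if line:
        chunk = json.loads(line)                   # Ollama streams one JSON object per line
        print(chunk.get("response", ""), end="", flush=True)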

🖥️ Low-End PC Support

This project is specifically optimized for older and low-end hardware:

  • Minimum: 2GB RAM, Dual-core CPU
  • Recommended: 4GB RAM, Quad-core CPU
  • Auto-detects device capabilities and adjusts performance
  • Smaller AI models (1B parameters) for faster responses
  • Reduced visual effects on weak hardware
  • Memory management and cleanup optimizations

See LOW_END_PC_OPTIMIZATION.md for the detailed optimization guide.
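
As a rough illustration of the auto-detection idea, a hedged sketch follows; the thresholds and the use of psutil are assumptions, not the project's actual logic:

import os
import psutil

def is_low_end() -> bool:
    total_gb = psutil.virtual_memory().total / (1024 ** 3)
    cores = os.cpu_count() or 1
    return total_gb <= 2 or cores <= 2             # mirrors the stated 2GB / dual-core minimum

if is_low_end():
    print("Low-end device: prefer a ~1B-parameter model and reduce visual effects")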

🚀 Quick Start

⚡ One-Click Startup (Recommended)

# Windows - Double-click or run:
Zeno_AI.bat

# Or use PowerShell:
.\start_zeno.ps1

# Or use npm:
npm run start:all

🔧 Manual Setup (First Time)

# 1. Install dependencies
npm run setup:complete

# 2. Start all services
npm run start:all

📋 Individual Commands

# Start services individually
npm run start:ollama    # Start Ollama
npm run start:backend   # Start Python backend  
npm run start:frontend  # Start React frontend

# Health checks
npm run health:check    # Check all services

# Stop everything
npm run stop:all        # Stop all services

For Low-End PCs (Recommended)

# Quick setup for low-end systems
setup_low_end.bat

Prerequisites

  1. Ollama (required): Install Ollama

    # After installation, pull a model:
    ollama pull llama2
  2. Node.js 18+

  3. Python 3.10+

Development Setup

  1. Clone and install dependencies:

    git clone <repo-url>
    cd jarvis
    npm install
  2. Setup Python backend:

    cd backend
    python -m venv venv
    
    # Windows
    venv\Scripts\activate
    
    # macOS/Linux
    source venv/bin/activate
    
    pip install -r requirements.txt
  3. Configure environment:

    # Copy example configs
    cp backend/.env.example backend/.env
    cp frontend/.env.example frontend/.env
  4. Run in development mode:

    # Terminal 1 - Backend
    cd backend
    venv\Scripts\activate  # or source venv/bin/activate
    python main.py
    
    # Terminal 2 - Frontend + Electron
    npm run dev

Production Build

# Build all platforms (requires platform-specific tools)
npm run build:all

# Build for current platform only
npm run build

# Outputs in /dist folder

πŸ“ Project Structure

jarvis/
├── frontend/           # React + Vite + TypeScript
│   ├── src/
│   │   ├── components/ # UI components
│   │   ├── services/   # Backend communication
│   │   ├── hooks/      # React hooks
│   │   └── types/      # TypeScript definitions
│   └── vite.config.ts
├── backend/            # Python FastAPI
│   ├── api/            # API routes
│   ├── services/       # Business logic (Ollama, STT, TTS)
│   ├── security/       # Auth, encryption, sandboxing
│   ├── plugins/        # Plugin system
│   └── main.py
├── electron/           # Electron main process
│   ├── main.js         # App lifecycle
│   ├── preload.js      # Secure IPC bridge
│   └── tray.js         # System tray
├── scripts/            # Build and packaging scripts
├── tests/              # Integration tests
└── docs/               # Additional documentation

🔧 Configuration

Backend Configuration (backend/.env)

# Ollama
OLLAMA_BASE_URL=http://localhost:11434
DEFAULT_MODEL=llama2
MAX_CONTEXT_TOKENS=4096

# Server
BACKEND_HOST=127.0.0.1
BACKEND_PORT=8765
WS_SECRET_TOKEN=<auto-generated>

# Security
ENABLE_ENCRYPTION=true
AUDIT_LOG_ENABLED=true
REQUIRE_ACTION_CONFIRMATION=true

# STT/TTS
STT_ENGINE=web  # web, whisper, vosk
TTS_ENGINE=web  # web, coqui, pyttsx3
WAKE_WORD_ENABLED=false
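
For illustration, here is one way the backend might load these settings, shown with python-dotenv; the variable names match the file above, but the project's actual loader may differ:

import os
from dotenv import load_dotenv

load_dotenv("backend/.env")

OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")
DEFAULT_MODEL = os.getenv("DEFAULT_MODEL", "llama2")
MAX_CONTEXT_TOKENS = int(os.getenv("MAX_CONTEXT_TOKENS", "4096"))
ENABLE_ENCRYPTION = os.getenv("ENABLE_ENCRYPTION", "true").lower() == "true"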

Model Selection

Recommended models by use case:

  • Fast responses: llama2:7b, mistral:7b
  • Better quality: llama2:13b, mixtral:8x7b
  • Code assistance: codellama:13b, deepseek-coder:6.7b
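
To check which of these models are already installed locally, you can query Ollama's documented /api/tags endpoint:

import requests

tags = requests.get("http://localhost:11434/api/tags").json()
for model in tags.get("models", []):
    print(model["name"])                           # e.g. "llama2:7b", "mistral:7b"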

🎤 Voice Features

Web Speech API (Default)

  • Works out of the box in Chromium-based Electron
  • Speech recognition may require an internet connection (the recognition service can be cloud-backed)
  • Push-to-talk and continuous listening modes

Offline STT (Optional)

Whisper (Recommended):

cd backend
pip install openai-whisper
# Downloads the model on first use (~140MB for base; larger models run to several GB)
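
A minimal transcription call with openai-whisper looks like this (audio.wav is a placeholder path):

import whisper

model = whisper.load_model("base")                 # downloaded automatically on first use
result = model.transcribe("audio.wav")
print(result["text"])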

VOSK (Lightweight):

pip install vosk
# Download model: https://alphacephei.com/vosk/models
# Extract to backend/models/vosk/

Wake Word (Optional)

Porcupine:

pip install pvporcupine
# Free tier: 1 wake word, requires API key
# Set PORCUPINE_ACCESS_KEY in .env
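
A bare-bones detection loop with pvporcupine might look like the following; audio capture is stubbed out, and a real integration would feed microphone frames of the expected length:

import os
import pvporcupine

porcupine = pvporcupine.create(
    access_key=os.environ["PORCUPINE_ACCESS_KEY"],
    keywords=["porcupine"],                        # built-in keyword, usable on the free tier
)

def on_audio_frame(pcm: list[int]) -> None:
    # pcm must hold porcupine.frame_length 16-bit samples at porcupine.sample_rate
    if porcupine.process(pcm) >= 0:
        print("Wake word detected")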

πŸ” Security & Privacy

Threat Model

  • Local-first: All data stays on your machine by default
  • No telemetry: Zero analytics or tracking
  • Encrypted storage: Optional AES-256-GCM encryption for chat history
  • Sandboxed execution: User scripts run in a restricted environment
  • Audit logging: All actions logged with timestamps

Data Storage

  • Chat history: ~/.jarvis/history.db (SQLite)
  • Audit logs: ~/.jarvis/logs/
  • Encrypted with password-derived key (PBKDF2 + AES-GCM)
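
The stated scheme (password-derived key via PBKDF2, AES-256-GCM for the data) can be sketched with the cryptography package; the salt, nonce, and iteration count below are illustrative, not Zeno's actual parameters:

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def encrypt(password: str, plaintext: bytes) -> bytes:
    salt, nonce = os.urandom(16), os.urandom(12)
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    key = kdf.derive(password.encode())            # 32 bytes -> AES-256
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return salt + nonce + ciphertext               # salt and nonce are stored with the data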

Action Permissions

All system actions require explicit user confirmation:

  • Shell command execution
  • File system access
  • Network requests
  • Application launching
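
Conceptually, such a gate can be pictured as a decorator that refuses to run an action until the user approves it; the names below are assumptions for illustration, not Zeno's API:

import functools

def requires_confirmation(action_name: str):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            # refuse unless the user explicitly approves the named action
            if input(f"Allow '{action_name}'? [y/N] ").strip().lower() != "y":
                raise PermissionError(f"User denied action: {action_name}")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@requires_confirmation("shell command execution")
def run_shell(cmd: str) -> None:
    ...                                            # actual execution would go here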

🧪 Testing

# Frontend tests
cd frontend
npm test

# Backend tests
cd backend
pytest

# Integration tests
npm run test:integration

# E2E tests
npm run test:e2e

📦 Building Installers

Windows

npm run build:win
# Outputs: dist/JARVIS-Setup-1.0.0.exe (NSIS)

macOS

npm run build:mac
# Outputs: dist/JARVIS-1.0.0.dmg
# Note: Requires code signing for notarization

Linux

npm run build:linux
# Outputs: dist/JARVIS-1.0.0.AppImage, .deb, .rpm

🔌 Plugin System

Create custom commands in ~/.jarvis/plugins/:

# example_plugin.py
from jarvis.plugin import Plugin, command

class WeatherPlugin(Plugin):
    @command(name="weather", description="Get weather info")
    async def get_weather(self, location: str):
        # Your implementation
        return f"Weather in {location}: Sunny"

Register plugins in the settings UI or in plugins.json.
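
A hypothetical plugins.json entry for the plugin above could look like this; the field names are illustrative, so treat the settings UI as the authoritative schema:

{
  "plugins": [
    {
      "name": "weather",
      "module": "example_plugin",
      "enabled": true,
      "permissions": []
    }
  ]
}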

🎨 Keyboard Shortcuts

  • Ctrl/Cmd + Shift + J: Toggle main window
  • Ctrl/Cmd + K: Focus chat input
  • Ctrl/Cmd + ,: Open settings
  • Space (hold): Push-to-talk
  • Esc: Stop generation

πŸ› Troubleshooting

Ollama Connection Failed

# Check if Ollama is running
ollama list

# Start Ollama service
ollama serve

Python Backend Won't Start

# Check port availability
netstat -an | findstr 8765  # Windows
lsof -i :8765               # macOS/Linux

# Verify Python dependencies
pip install -r backend/requirements.txt --upgrade

Electron Build Fails

# Clear cache
npm run clean
rm -rf node_modules
npm install

📚 Documentation

Additional guides live in the docs/ folder; LOW_END_PC_OPTIMIZATION.md covers tuning for low-end hardware.

📄 License

MIT License - See LICENSE file

πŸ™ Third-Party Licenses

  • Ollama: MIT License
  • FastAPI: MIT License
  • React: MIT License
  • Electron: MIT License
  • Whisper (optional): MIT License
  • VOSK (optional): Apache 2.0
  • Porcupine (optional): Proprietary (free tier available)

🤝 Contributing

Contributions welcome! Please read CONTRIBUTING.md first.

⚠️ Disclaimer

This is a local-first AI assistant. While we prioritize security and privacy, users are responsible for:

  • Securing their encryption passwords
  • Reviewing plugin code before installation
  • Understanding model capabilities and limitations
  • Complying with Ollama and model licenses
