

Privacy-First AI Assistant & Agent Builder
Chat with AI, create intelligent agents, and turn them into fully functional apps, powered entirely by open-source models running on your own device.
- On-Premise Execution: All AI models and automation workflows run entirely on your infrastructure
- Zero Data Leakage: Your business data never leaves your network. No cloud dependencies, no external APIs
- Complete Control: Built on open-source technology stack, giving you full control over your automation infrastructure
Leverage embedded N8N workflow engine to create sophisticated business processes with AI integration:
Design intelligent business processes that combine N8N workflows with custom AI agents - all from a single interface:
Design custom AI agents with a node-based editor, then convert them into standalone business applications:
Chat with any Ollama-compatible model, including multimodal models that understand images:
Create amazing images from text prompts using Stable Diffusion models with ComfyUI integration:
Browse, search, and manage all generated images in one convenient gallery:
- Download .dmg installer
- Universal binary (works on both Intel and Apple Silicon)
- Fully signed and notarized for enhanced security
- Download .AppImage
- Runs on most Linux distributions
- No installation required
- We recommend using the Docker version for best performance and security
- If you need the native app: Download .exe installer
- I don't have money for signing it 😢
- Try Clara Online
- Requires a local Ollama installation; limited to chat only, and remote Ollama access must be configured manually
- Install Ollama (Required for all versions except Docker) Download from Ollama's website
- Connect Clara to the default Ollama endpoint:
http://localhost:11434
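To confirm Clara can reach Ollama, you can query the endpoint's REST API directly; `/api/tags` lists the models installed locally. A quick sketch, assuming the default port:

```shell
# Probe the default Ollama endpoint; /api/tags returns installed models as JSON.
# --max-time keeps the check from hanging if Ollama is not running.
if curl --silent --max-time 3 http://localhost:11434/api/tags > /dev/null; then
  echo "Ollama is reachable"
else
  echo "Ollama is not running on localhost:11434"
fi
```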
For faster performance and offline convenience, download the native desktop version:
If you see a message that the app is damaged or can't be opened:
- Right-click (or Control+click) on the app in Finder
- Select "Open" from the context menu
- Click "Open" on the security dialog
- If still blocked, go to System Preferences > Security & Privacy > General and click "Open Anyway"
This happens because the app is not notarized with Apple. This is perfectly safe, but macOS requires this extra step for unsigned applications.
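If Gatekeeper still refuses to open the app, clearing the quarantine attribute from the terminal usually resolves it. The path below assumes Clara was installed to /Applications; adjust it if yours differs:

```shell
# Remove the com.apple.quarantine extended attribute that macOS adds to
# downloaded files; -c clears the attributes, -r recurses into the app bundle.
xattr -cr /Applications/Clara.app
```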
Building for macOS:
- Development build (no notarization):
npm run electron:build-mac-dev
- Production build (with notarization, requires Apple Developer Program):
- Set the environment variables APPLE_ID, APPLE_ID_PASSWORD (an app-specific password), and APPLE_TEAM_ID
- Run:
npm run electron:build-mac
To get an Apple Team ID, join the Apple Developer Program.
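Put together, a production build might look like the following; the three credential values are placeholders you must replace with your own:

```shell
# Notarization credentials (placeholder values; substitute your own).
export APPLE_ID="you@example.com"
export APPLE_ID_PASSWORD="abcd-efgh-ijkl-mnop"  # app-specific password from appleid.apple.com
export APPLE_TEAM_ID="ABCDE12345"

# Build, sign, and notarize the macOS app
npm run electron:build-mac
```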
# Clone the repository
git clone https://github.com/badboysm890/ClaraVerse.git
cd ClaraVerse
# Install dependencies
npm install
# Start development server (web)
npm run dev
# Start development server (desktop)
npm run electron:dev
If Ollama runs on another machine:
- Enable CORS in Ollama (~/.ollama/config.json):
{ "origins": ["*"] }
- In Clara settings, specify:
http://{IP_ADDRESS}:11434
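As a rough sketch, the remote setup can be verified like this. Note that newer Ollama releases read allowed origins from the OLLAMA_ORIGINS environment variable rather than a config file, and 192.168.1.50 below is a placeholder for your server's actual address:

```shell
# On the machine running Ollama: allow cross-origin requests.
# (Newer Ollama releases use the OLLAMA_ORIGINS environment variable;
# older ones read ~/.ollama/config.json as described above.)
OLLAMA_ORIGINS="*" ollama serve

# From the machine running Clara: confirm the endpoint responds.
# 192.168.1.50 is a placeholder for your Ollama server's IP address.
curl --silent --max-time 3 http://192.168.1.50:11434/api/tags || echo "remote Ollama not reachable"
```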
# Build web version
npm run build
# Build desktop app
npm run electron:build
Have questions or need help? Reach out via praveensm890@gmail.com.
Before using Clara's N8N integration, ensure you have the following installed:
macOS & Linux:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
Then add to your shell configuration (~/.bash_profile, ~/.zshrc, ~/.bashrc):
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
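After reloading your shell configuration (or opening a new terminal), you can confirm nvm is available:

```shell
# Load nvm into the current shell and print its version.
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
nvm --version || echo "nvm not found; re-check the install step above"
```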
Windows:
- Download and install nvm-windows
- Run installer as administrator
- Restart terminal after installation
# Install latest LTS version
nvm install --lts
# Use the installed version
nvm use --lts
npm install n8n -g
Clara automatically manages N8N processes, but you can also run it manually:
# Start N8N
n8n start
# Start in tunnel mode (for remote access)
n8n start --tunnel
- N8N Doesn't Start
# Check if port 5678 is in use
lsof -i :5678                 # macOS/Linux
netstat -ano | findstr :5678  # Windows

# Kill existing process if needed
kill -9 <PID>           # macOS/Linux
taskkill /PID <PID> /F  # Windows
- Permission Issues
# macOS/Linux
sudo chown -R $USER ~/.n8n

# Windows (Run PowerShell as Administrator)
takeown /F "%USERPROFILE%\.n8n" /R
- Database Errors
# Remove the N8N database (it will be recreated on next start)
rm -rf ~/.n8n/database.sqlite             # macOS/Linux
del "%USERPROFILE%\.n8n\database.sqlite"  # Windows
- Node Version Conflicts
# Ensure correct Node version
nvm install 16
nvm use 16
npm install n8n -g
macOS:
- If installation fails, ensure Xcode Command Line Tools are installed:
xcode-select --install
- For M1/M2 Macs, you might need Rosetta:
softwareupdate --install-rosetta
Linux:
- Ensure build essentials are installed:
sudo apt-get update
sudo apt-get install -y build-essential
- For Ubuntu/Debian, you might need additional dependencies:
sudo apt-get install -y python3 make gcc g++
Windows:
- Run PowerShell as Administrator when installing global packages
- Ensure Windows Build Tools are installed:
npm install --global windows-build-tools
- If you encounter path issues:
- Check System Environment Variables
- Ensure Node and npm paths are correctly set
- Restart PowerShell/CMD after path changes
# Check N8N version
n8n --version
# Check if N8N service is running
curl http://localhost:5678 # macOS/Linux
Invoke-WebRequest -Uri http://localhost:5678 # Windows