Ask questions about your local folders and internet sources using local LLMs with RAG-powered insights.
Charisma Studio is a desktop client that transforms how you interact with local directories (codebases, research papers, projects) and web sources by combining local LLMs served through Ollama with Retrieval-Augmented Generation (RAG) and vector database search. Select any folder and/or any web sources, and get AI-powered context about their contents through an intuitive chat interface.
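The app's actual retrieval code isn't shown here, but the core RAG idea can be sketched in a few lines of TypeScript: embed document chunks, rank them by cosine similarity against the query embedding, and feed the top-k chunks to the LLM as context. The `Chunk` type and function names below are illustrative, not the app's real API; in Charisma Studio the ranking is handled by the vector database (HNSWLib) rather than a linear scan.

```typescript
// Illustrative sketch of the retrieval step in a RAG pipeline.
// Embeddings are plain number arrays here; a real app would get them
// from an embedding model and index them in a vector store.

interface Chunk {
  text: string;
  embedding: number[];
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the k chunks most similar to the query embedding.
function topKChunks(query: number[], chunks: Chunk[], k: number): Chunk[] {
  return [...chunks]
    .sort(
      (x, y) =>
        cosineSimilarity(query, y.embedding) -
        cosineSimilarity(query, x.embedding)
    )
    .slice(0, k);
}
```

The selected chunks are then interpolated into the prompt sent to Ollama, which is what grounds the model's answer in your folder's contents.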
- Local LLM Integration - Direct connection to Ollama instances
- RAG-Powered Insights - Database vector search for precise context
- Multi-Format Support - Process code, text, Markdown, and PDFs
- Real-Time Streaming - Typewriter-style response delivery
- Electron Packaging - Cross-platform desktop app
- Type-Safe Architecture - Built with React + TypeScript
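The "typewriter" streaming above can be sketched with an async generator: tokens are consumed as they arrive and the partial message is re-rendered on each one. The generator below simulates a token stream; Charisma Studio's real client reads streamed responses from Ollama instead, and `onToken` stands in for whatever UI update (e.g. a React state setter) displays the growing message.

```typescript
// Simulated token stream; a real client would yield tokens as
// network chunks arrive from the Ollama API.
async function* fakeTokenStream(tokens: string[]): AsyncGenerator<string> {
  for (const t of tokens) {
    yield t;
  }
}

// Consume the stream, invoking onToken with the partial message each
// time a token lands -- this produces the typewriter effect.
async function renderStream(
  stream: AsyncGenerator<string>,
  onToken: (partial: string) => void
): Promise<string> {
  let message = "";
  for await (const token of stream) {
    message += token;
    onToken(message);
  }
  return message;
}
```

Driving the UI from partial strings (rather than waiting for the full response) is what keeps long generations feeling responsive.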
| Component | Choice | Why? |
|---|---|---|
| Frontend | React + TypeScript | Type safety and a rich component ecosystem |
| Styling | Tailwind CSS + Headless UI | Fast, accessible UI development |
| LLM Core | Ollama + HNSWLib | Local inference and vector search, wired together with LangChain |
| Packaging | Electron + Vite | Cross-platform binaries |
- Node.js 18+ and pnpm
```bash
# 1. Clone repository
git clone https://github.com/JoshDiDuca/charisma-ai.git
cd charisma-ai

# 2. Install dependencies
pnpm install

# 3. Start dev application
pnpm dev
```

Note: check the Electron Builder docs for more details.
```bash
pnpm build --mac
# OR
pnpm build --win
# OR
pnpm build --linux
```

The built apps will be available in the `release` folder.
- Multiple conversations with storage
- Text-to-speech (TTS)
- Improved RAG data sources
- AI response sources
- Web RAG queries
- Switch from ChromaDB to hnswlib (better licensing and more flexibility)
- Automatically download Ollama and dependencies
- Full download for models instead of Ollama pulling
- Message attachments
- Flexible UI
- Improved handling when Ollama is already installed or its executables are missing from the release build
- Fix large PDF/text files failing to embed
- Web RAG queries that read file URLs directly (e.g. PDFs)
- Next prompt suggestions
- Reasoning model "thinking" support
- Hugging Face
- Ability to add custom models
- Relational database queries
- Revisit and fix TTS
- Translate results
- LlamaIndex full integration
- JSON "Tools/Agents"
- Settings
- Dark theme
- Ignore Paths, Files, Source Settings
- Coding mode setting
- TTS voice model selection
- Source viewing/editing, with the ability to read sources aloud via TTS
- CSV file support
- Agents/Pipelines/Custom Reasoning
- Agent tasks such as creating, editing, writing files
- Easy access to run any AI locally
- AI Pair Programmer - Refactor code via chat
- Multi Source Data Analysis - Images/PDFs/Web Sources/Database support
- AI Collaboration - Shared session histories
We welcome contributions! Please follow our contribution guidelines:
- Fork the repository
- Create a feature branch (`git checkout -b feat/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push the branch (`git push origin feat/amazing-feature`)
- Open a Pull Request
*Ollama (MIT), Piper (MIT), and hnswlib/hnswlib-node (Apache) have their own licenses; please review them separately.

