Chat with your files using AI — completely offline, 100% private.
Features • Screenshots • Installation • Usage • Architecture • Contributing • License
Mnemora is a desktop application that lets you have natural conversations with your local documents — PDFs, Markdown notes, code files, and more. Powered entirely by local AI models through Ollama, your data never leaves your computer.
| Feature | Description |
|---|---|
| 🔒 100% Private | All processing happens locally. No cloud, no data sharing, no subscriptions |
| ⚡ Semantic Search | Find information across thousands of documents using natural language |
| 📚 Source Citations | Every answer shows exactly which files it came from with relevance scores |
| 🎨 Beautiful UI | Modern, dark-themed interface built with React and Tailwind CSS |
| 🧩 Extensible | Open source, well-documented, and easy to customize |
- 📁 Index Any Folder — Add folders containing your documents and let Mnemora understand them
- 💬 Natural Chat Interface — Ask questions in plain English and get intelligent answers
- 🔗 Smart Citations — See exactly which documents contributed to each answer
- 📊 Relevance Scoring — Visual indicators show how relevant each source is
- ⚡ Streaming Responses — See AI responses as they're generated in real-time
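Semantic search and the per-source relevance scores are typically derived from embedding similarity. A minimal sketch of cosine similarity between two embedding vectors — illustrative only, not necessarily the exact formula Mnemora uses (ChromaDB reports distances that may be rescaled for display):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0  # define similarity with a zero vector as 0
    return dot / (norm_a * norm_b)
```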
| Type | Extensions | Features |
|---|---|---|
| Markdown | .md, .markdown | Obsidian [[wiki-links]], #tags, YAML frontmatter |
| PDF | .pdf | Full text extraction with page markers |
| Plain Text | .txt | Direct content indexing |
| Code | .py, .js, .ts, .go, .rs, etc. | Language detection, structure extraction |
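A common way to route files to the right parser is a simple extension lookup. This is a hypothetical sketch — the names `PARSERS` and `pick_parser` are illustrative, not Mnemora's actual registry:

```python
from pathlib import Path

# Hypothetical extension-to-parser mapping; the real registry may differ.
PARSERS = {
    ".md": "markdown", ".markdown": "markdown",
    ".pdf": "pdf",
    ".txt": "text",
    ".py": "code", ".js": "code", ".ts": "code", ".go": "code", ".rs": "code",
}

def pick_parser(path: str) -> str:
    """Return the parser family for a file, falling back to plain text."""
    return PARSERS.get(Path(path).suffix.lower(), "text")
```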
- Ollama — Local LLM inference with models like Llama 3.2, Mistral, CodeLlama
- ChromaDB — High-performance vector database for semantic search
- FastAPI — Modern Python backend with async support
- Electron — Cross-platform desktop application
- React — Component-based UI with Zustand for state management
Ask natural language questions about your documents.
Get detailed answers with information extracted from your files.
See exactly which files contributed to each answer with relevance scores.
| Requirement | Version | Download |
|---|---|---|
| Node.js | 18+ | nodejs.org |
| Python | 3.10+ | python.org |
| Ollama | Latest | ollama.ai |
```bash
# 1. Clone the repository
git clone https://github.com/AIkaptan/Mnemora.git
cd Mnemora

# 2. Install Node.js dependencies
npm install

# 3. Install Python dependencies
cd backend
pip install -r requirements.txt
cd ..
```

Mnemora includes automatic setup — on first launch, it will:
- ✅ Check if Ollama is installed and running
- ✅ Guide you to install Ollama if needed
- ✅ Download required AI models automatically
- ✅ Start the application once ready
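The first two checks boil down to "is the binary on PATH?" and "does the server answer on its default port (11434)?". A sketch of that kind of probe — the function name and return shape are assumptions, not Mnemora's actual setup code:

```python
import shutil
import urllib.request

def ollama_status(base_url: str = "http://localhost:11434") -> dict:
    """Report whether the Ollama binary is installed and the server responds."""
    installed = shutil.which("ollama") is not None
    running = False
    try:
        # Ollama's root endpoint replies with "Ollama is running"
        with urllib.request.urlopen(base_url, timeout=2) as resp:
            running = resp.status == 200
    except OSError:
        pass  # connection refused / timeout: server not running
    return {"installed": installed, "running": running}
```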
You need 3 terminals running:

```bash
# Terminal 1: Start Ollama
ollama serve

# Terminal 2: Start Python backend
cd backend
python main.py

# Terminal 3: Start the Electron app
npm run electron
```

- Click the ➕ button next to "Indexed Folders"
- Select a folder containing your documents
- Wait for indexing to complete (progress bar shown)
- Start chatting with your files!
Simply type natural language questions in the chat:
- "What are the main themes in my notes?"
- "Summarize the key points from my resume"
- "What projects have I worked on?"
- "Find all mentions of machine learning"
```
┌─────────────────────────────────────────────────────────────┐
│                    ELECTRON (Desktop)                       │
│  ┌────────────────────────────────────────────────────────┐ │
│  │         React + Tailwind CSS + Zustand                 │ │
│  │   Sidebar | Chat | Sources Panel | Settings Modal      │ │
│  └────────────────────────────────────────────────────────┘ │
│                         ↕ HTTP                              │
└─────────────────────────────────────────────────────────────┘
                            ↕
┌─────────────────────────────────────────────────────────────┐
│                 PYTHON BACKEND (FastAPI)                    │
│  ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌─────────────┐    │
│  │ Indexer  │ │   RAG    │ │ Parsers  │ │ Vector Store│    │
│  │          │ │ Pipeline │ │ MD/PDF/  │ │ (ChromaDB)  │    │
│  │          │ │          │ │ Code     │ │             │    │
│  └──────────┘ └──────────┘ └──────────┘ └─────────────┘    │
│                         ↕ HTTP                              │
└─────────────────────────────────────────────────────────────┘
                            ↕
┌─────────────────────────────────────────────────────────────┐
│                         OLLAMA                              │
│         Local LLM Inference + Embedding Generation          │
│           (llama3.2, nomic-embed-text, etc.)                │
└─────────────────────────────────────────────────────────────┘
```
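Before embeddings reach the vector store, the Indexer splits each document into chunks; overlapping fixed-size chunks are a common approach so that context is not cut mid-thought. A minimal sketch — the function name, chunk size, and overlap are illustrative assumptions, not Mnemora's actual parameters:

```python
def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with overlap."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap  # step forward, keeping `overlap` chars shared
    return chunks
```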
```
mnemora/
├── electron/               # Electron main process
│   ├── main.js             # Window management, IPC handlers
│   └── preload.js          # Secure bridge to renderer
├── src/                    # React frontend
│   ├── components/         # UI components
│   │   ├── Chat/           # ChatView, MessageBubble
│   │   ├── Sidebar/        # Folder management, status
│   │   ├── Sources/        # Citations panel
│   │   ├── Settings/       # Model selection, theme
│   │   └── Setup/          # First-run setup wizard
│   ├── stores/             # Zustand state management
│   └── index.css           # Tailwind + custom styles
├── backend/                # Python FastAPI backend
│   ├── api/                # REST endpoints
│   ├── services/           # Core business logic
│   │   ├── indexer.py      # Document processing
│   │   ├── rag.py          # RAG pipeline
│   │   ├── vector_store.py # ChromaDB wrapper
│   │   └── ollama_client.py # Ollama API client
│   └── parsers/            # Document parsers
│       ├── markdown_parser.py
│       ├── pdf_parser.py
│       └── code_parser.py
├── docs/                   # Documentation & screenshots
├── SETUP.md                # Detailed setup guide
├── CONTRIBUTING.md         # Contribution guidelines
├── CODE_OF_CONDUCT.md      # Community standards
└── SECURITY.md             # Security policy
```
We welcome contributions! Please see our Contributing Guidelines for details.
- 🐛 Report Bugs — Open an issue with reproduction steps
- 💡 Suggest Features — Share your ideas for improvements
- 📝 Improve Docs — Help make our documentation better
- 🔧 Submit PRs — Fix bugs or add new features
- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Make your changes and test thoroughly
- Commit with descriptive messages: `git commit -m 'feat: add amazing feature'`
- Push to your fork: `git push origin feature/amazing-feature`
- Open a Pull Request
This project is licensed under the GNU General Public License v3.0 — see the LICENSE file for details.
This means you are free to:
- ✅ Use the software for any purpose
- ✅ Study and modify the source code
- ✅ Distribute copies
- ✅ Distribute modified versions
Under the condition that derivative works are also licensed under GPL-3.0.
- Ollama — For making local LLM inference accessible
- ChromaDB — For the excellent vector database
- Electron — For cross-platform desktop apps
- Tailwind CSS — For beautiful utility-first styling
- The open-source AI community for continuous inspiration
Made with ❤️ for privacy-conscious knowledge workers


