TACHIKOMA-OS is a modular AI ecosystem that combines a memory graph (GraphRAG), intelligent agents with tool capabilities, and automatic model selection based on available VRAM.
Available as:
- 🌐 Web Application (React/Vite)
- 🖥️ Desktop Application (Windows, Linux, macOS via Tauri)
```
┌──────────────────────────────────────────────────────────┐
│                       TACHIKOMA-OS                       │
├──────────────────────────────────────────────────────────┤
│                                                          │
│  ┌────────────┐    ┌────────────┐    ┌────────────┐      │
│  │  User UI   │    │  Admin UI  │    │  Z-Brain   │      │
│  │  (React)   │    │  (React)   │    │   (CLI)    │      │
│  │   :5173    │    │   :5174    │    │            │      │
│  └─────┬──────┘    └─────┬──────┘    └─────┬──────┘      │
│        │                 │                 │             │
│        └─────────────────┼─────────────────┘             │
│                          │                               │
│             ┌────────────┴─────────┐                     │
│             │  API Gateway (Axum)  │                     │
│             │        :3000         │                     │
│             └────────────┬─────────┘                     │
│                          │                               │
│  ┌───────────────────────┴─────────────────────┐         │
│  │             Microservices Layer             │         │
│  │   ┌─────────┐   ┌─────────┐   ┌─────────┐   │         │
│  │   │  Chat   │   │ Memory  │   │  Agent  │   │         │
│  │   │  :3003  │   │  :3004  │   │  :3005  │   │         │
│  │   └─────────┘   └─────────┘   └─────────┘   │         │
│  │   ┌─────────┐   ┌─────────┐   ┌─────────┐   │         │
│  │   │Checklst │   │  Music  │   │  Voice  │   │         │
│  │   │  :3001  │   │  :3002  │   │  :8100  │   │         │
│  │   └─────────┘   └─────────┘   └─────────┘   │         │
│  └───────────────────────┬─────────────────────┘         │
│                          │                               │
│  ┌───────────────────────┴─────────────────────┐         │
│  │            Infrastructure Layer             │         │
│  │  ┌───────────┐   ┌───────────┐   ┌───────┐  │         │
│  │  │ SurrealDB │   │  Ollama   │   │Searxng│  │         │
│  │  │   :8000   │   │  :11434   │   │ :8080 │  │         │
│  │  └───────────┘   └───────────┘   └───────┘  │         │
│  └─────────────────────────────────────────────┘         │
│                                                          │
└──────────────────────────────────────────────────────────┘
```
| Service | Port | Description |
|---|---|---|
| tachikoma-backend | 3000 | Central API Gateway + LLM Gateway |
| tachikoma-checklists | 3001 | Checklist management |
| tachikoma-music | 3002 | YouTube music streaming |
| tachikoma-chat | 3003 | LLM conversations |
| tachikoma-memory | 3004 | GraphRAG semantic memory |
| tachikoma-agent | 3005 | AI agent tools |
| tachikoma-voice | 8100 | Piper TTS synthesis |
Ollama runs independently in the tachikoma-ollama project:
| Service | Port | Description |
|---|---|---|
| Ollama | 11434 | LLM inference server |
Important: All LLM operations (chat, embeddings, speculative decoding) go through tachikoma-backend's /api/llm/* endpoints. Microservices should NOT connect directly to Ollama.
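Since all LLM traffic must pass through the gateway, a client inside a microservice reduces to a URL builder that always targets the backend's `/api/llm/*` prefix, never Ollama's port. A minimal sketch, assuming the gateway base URL is configurable; the specific sub-paths (`chat`, `embeddings`) are illustrative assumptions beyond the documented `/api/llm/*` prefix:

```rust
/// Builds gateway URLs for LLM operations so microservices never
/// talk to Ollama (:11434) directly. Sub-paths beyond the documented
/// /api/llm/* prefix are illustrative assumptions.
fn llm_endpoint(gateway_base: &str, operation: &str) -> String {
    format!("{}/api/llm/{}", gateway_base.trim_end_matches('/'), operation)
}

fn main() {
    // Hypothetical operations, all routed through tachikoma-backend (:3000)
    println!("{}", llm_endpoint("http://localhost:3000", "chat"));
    println!("{}", llm_endpoint("http://localhost:3000/", "embeddings"));
}
```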
Additional microservices:
| Service | Port | Description |
|---|---|---|
| tachikoma-kanban | 3006 | Kanban boards |
| tachikoma-note | 3007 | Notes + voice transcription |
| tachikoma-docs | 3008 | AI document generation (DOCX, XLSX, PPTX) |
| tachikoma-calendar | 3009 | Calendar + reminders |
| tachikoma-pomodoro | 3010 | Pomodoro timer |
| tachikoma-image | 3011 | AI image gallery |
- Graph + Vector Storage: Uses SurrealDB for both relationship graphs and vector embeddings
- 11 Relation Types: RelatedTo, Causes, PartOf, HasProperty, UsedFor, CapableOf, AtLocation, CreatedBy, DerivedFrom, SimilarTo, ContradictsWith
- Semantic Search: Find relevant memories using embedding similarity
- Automatic Memory Extraction: Extracts facts, preferences, and entities from conversations
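The semantic-search feature above boils down to ranking stored embeddings by similarity to a query embedding. A minimal sketch of that ranking, assuming embeddings are plain `f32` vectors (in the real system SurrealDB stores the vectors and `nomic-embed-text` produces them):

```rust
/// Cosine similarity between two embedding vectors.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

/// Returns the indices of the k memories most similar to the query.
fn top_k(query: &[f32], memories: &[Vec<f32>], k: usize) -> Vec<usize> {
    let mut scored: Vec<(usize, f32)> = memories
        .iter()
        .enumerate()
        .map(|(i, m)| (i, cosine(query, m)))
        .collect();
    scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    scored.into_iter().take(k).map(|(i, _)| i).collect()
}

fn main() {
    let memories = vec![
        vec![1.0, 0.0], // very similar to the query below
        vec![0.0, 1.0], // orthogonal, unrelated
        vec![0.9, 0.1], // close to the query
    ];
    println!("{:?}", top_k(&[1.0, 0.0], &memories, 2)); // [0, 2]
}
```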
- Automatic Model Selection: Chooses the best model based on available VRAM
  - `ministral-3b` (Fast) - Quick responses, <4GB VRAM
  - `qwen2.5:7b` (Balanced) - Good quality, 4-8GB VRAM
  - `qwen2.5-coder:14b` (Complex) - Best for coding, >8GB VRAM
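The tiers above map naturally onto a small selection function. A sketch, assuming available VRAM is already known in GB (the real service queries the GPU; the exact behavior at the tier boundaries is an assumption):

```rust
/// Picks a model from available VRAM, following the documented tiers:
/// <4GB fast, 4-8GB balanced, >8GB complex. Boundary handling is assumed.
fn select_model(vram_gb: f32) -> &'static str {
    if vram_gb > 8.0 {
        "qwen2.5-coder:14b" // Complex: best for coding
    } else if vram_gb >= 4.0 {
        "qwen2.5:7b"        // Balanced: good quality
    } else {
        "ministral-3b"      // Fast: quick responses
    }
}

fn main() {
    println!("{}", select_model(2.0));  // ministral-3b
    println!("{}", select_model(6.0));  // qwen2.5:7b
    println!("{}", select_model(12.0)); // qwen2.5-coder:14b
}
```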
- Built-in Tools:
  - `search_web`: Privacy-respecting web search via Searxng
  - `execute_command`: Safe local command execution (whitelisted)
  - `remember`: Store facts in long-term memory
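Conceptually, the agent maps a tool name chosen by the LLM to one of these handlers and rejects anything outside the set. A minimal dispatch sketch; the handler bodies and the whitelist contents are placeholders, not the real implementations:

```rust
/// Dispatches an agent tool call by name; unknown tools are rejected
/// rather than executed. Handler bodies are illustrative stubs.
fn dispatch_tool(name: &str, arg: &str) -> Result<String, String> {
    match name {
        "search_web" => Ok(format!("searching Searxng for: {arg}")),
        "execute_command" => {
            // The real service only runs whitelisted commands.
            let whitelist = ["ls", "date", "uptime"]; // illustrative
            if whitelist.contains(&arg) {
                Ok(format!("running: {arg}"))
            } else {
                Err(format!("command not whitelisted: {arg}"))
            }
        }
        "remember" => Ok(format!("stored memory: {arg}")),
        other => Err(format!("unknown tool: {other}")),
    }
}

fn main() {
    println!("{:?}", dispatch_tool("search_web", "rust axum"));
    println!("{:?}", dispatch_tool("execute_command", "rm")); // rejected
}
```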
- Complete Axum-based API
- Endpoints: `/chat`, `/memories`, `/admin/graph`, `/agent`, `/system`
- CORS support for frontend applications
- User UI: React + TypeScript + Tailwind chat interface
- 🌐 Web: Runs in browser (localhost:5173)
- 🖥️ Desktop: Native app for Windows/Linux/macOS via Tauri
- Dark/Light mode
- i18n support (English/Spanish)
- Conversation history with grouping
- Typing indicators and markdown rendering
- Desktop build: See TACHIKOMA_DESKTOP_SETUP.md
- Admin UI: Memory graph management dashboard
- Force-directed graph visualization (react-force-graph)
- Statistics dashboard with charts
- Memory CRUD operations
- System health monitoring
- Interactive shell for terminal-based interaction
- Command history with persistence
- Special commands: `/help`, `/new`, `/search`, `/models`
- Quick query mode: `zbrain "your question"`
```
kibo/
├── docker-compose.yml        # Container orchestration
├── docker-compose.dev.yml    # Development overrides
├── dev.sh                    # Development helper script
├── config/
│   └── searxng/
│       └── settings.yml      # Searxng configuration
├── tachikoma-backend/        # API Gateway (Rust/Axum)
│   └── src/
│       ├── domain/           # Entities, Value Objects
│       ├── application/      # Business logic
│       └── infrastructure/   # API, DB, Adapters
├── tachikoma-checklists/     # Checklist microservice
├── tachikoma-music/          # Music streaming microservice
├── tachikoma-chat/           # LLM chat microservice
├── tachikoma-memory/         # GraphRAG memory microservice
├── tachikoma-agent/          # Agent tools microservice
├── tachikoma-voice/          # TTS microservice
├── tachikoma-ui/             # User interface (React)
├── tachikoma-admin/          # Admin dashboard (React)
└── zbrain/                   # CLI shell
```
- Docker & Docker Compose
- Node.js 18+
- Rust 1.75+
- NVIDIA GPU with CUDA (optional, for GPU acceleration)
```bash
cd kibo
cp .env.example .env
# Edit .env with your settings
```

First, clone and start tachikoma-ollama in a separate directory:

```bash
# In a separate project directory (not in kibo)
git clone https://github.com/madkoding/tachikoma-ollama.git
cd tachikoma-ollama
./setup.sh  # Downloads models and starts Ollama
```

Then start the infrastructure services:

```bash
# In the kibo directory
docker-compose up -d surrealdb searxng
```

Start the backend:

```bash
cd tachikoma-backend
cargo run --release
```

Web version:

```bash
cd tachikoma-ui
npm install
npm run dev
```

Desktop version:

```bash
cd tachikoma-ui
npm install
npm run tauri:dev    # Development with hot-reload
# Or for production build:
npm run tauri:build  # Generates native executable
```

See TACHIKOMA_DESKTOP_SETUP.md for the complete desktop build guide.

```bash
cd tachikoma-admin
npm install
npm run dev
```

```bash
cd zbrain
cargo build --release
# Binary at target/release/zbrain
./target/release/zbrain
```

- `POST /api/chat` - Send a message and get an AI response
- `GET /api/memories` - List all memories
- `POST /api/memories` - Create a memory
- `GET /api/memories/search?query=...` - Search memories
- `GET /api/memories/:id` - Get a memory by ID
- `DELETE /api/memories/:id` - Delete a memory
- `GET /api/memories/:id/related` - Get related memories
- `GET /api/admin/graph` - Get the full memory graph
- `GET /api/admin/graph/stats` - Get graph statistics
- `GET /api/system/health` - Health check
- `GET /api/system/models` - List available models
- `GET /api/system/vram` - Get VRAM information
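As an illustration of the chat endpoint, a request body can be assembled like this. The field names `message` and `conversation_id` are assumptions, not taken from the backend's actual schema, and the JSON is built by hand with std only so the sketch stays dependency-free:

```rust
/// Builds a JSON body for POST /api/chat. Field names are assumed,
/// not confirmed against the actual backend request schema.
fn chat_request_body(message: &str, conversation_id: Option<&str>) -> String {
    // Escape backslashes and quotes for JSON string safety.
    let escaped = message.replace('\\', "\\\\").replace('"', "\\\"");
    match conversation_id {
        Some(id) => format!(
            "{{\"message\":\"{escaped}\",\"conversation_id\":\"{id}\"}}"
        ),
        None => format!("{{\"message\":\"{escaped}\"}}"),
    }
}

fn main() {
    // Would be POSTed to http://localhost:3000/api/chat
    println!("{}", chat_request_body("hello", None));
    println!("{}", chat_request_body("say \"hi\"", Some("conv-1")));
}
```

In a real client you would use a JSON library such as `serde_json` rather than hand-escaping strings.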
| Variable | Description | Default |
|---|---|---|
| `TACHIKOMA_API_PORT` | Backend port | `3000` |
| `SURREALDB_URL` | SurrealDB connection | `ws://localhost:8000` |
| `SURREALDB_USER` | Database user | `root` |
| `SURREALDB_PASS` | Database password | `root` |
| `OLLAMA_URL` | Ollama API URL | `http://localhost:11434` |
| `SEARXNG_URL` | Searxng URL | `http://localhost:8080` |
| `FAST_MODEL` | Quick response model | `ministral:3b` |
| `BALANCED_MODEL` | Balanced model | `qwen2.5:7b` |
| `COMPLEX_MODEL` | Complex task model | `qwen2.5-coder:14b` |
| `EMBED_MODEL` | Embedding model | `nomic-embed-text` |
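These variables can be read with fallbacks to the documented defaults. A minimal sketch using only std (the real backend may well use a config crate instead):

```rust
use std::env;

/// Reads an environment variable, falling back to the documented default.
fn env_or(key: &str, default: &str) -> String {
    env::var(key).unwrap_or_else(|_| default.to_string())
}

fn main() {
    let port = env_or("TACHIKOMA_API_PORT", "3000");
    let db = env_or("SURREALDB_URL", "ws://localhost:8000");
    let ollama = env_or("OLLAMA_URL", "http://localhost:11434");
    println!("gateway :{port}, db {db}, ollama {ollama}");
}
```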
```bash
cd tachikoma-backend
cargo watch -x run   # Auto-reload on changes
```

```bash
cd tachikoma-ui
npm run dev          # Vite dev server with HMR
```

```bash
# Backend tests
cd tachikoma-backend
cargo test

# Z-Brain tests
cd zbrain
cargo test
```

MIT License - See LICENSE for details.
Contributions are welcome! Please read our contributing guidelines before submitting PRs.
Built with ❤️ using Rust, React, and AI