
Sirchmunk Logo

Sirchmunk: Raw data to self-evolving intelligence, real-time.

Python FastAPI Next.js TailwindCSS DuckDB License ripgrep-all OpenAI Kreuzberg MCP

Quick Start · Key Features · Web UI · How it Works · FAQ

🇨🇳 中文

πŸ” Agentic Search Β β€’Β  🧠 Knowledge Clustering Β β€’Β  πŸ“Š Monte Carlo Evidence Sampling
⚑ Indexless Retrieval Β β€’Β  πŸ”„ Self-Evolving Knowledge Base Β β€’Β  πŸ’¬ Real-time Chat


🌰 Why "Sirchmunk"?

Intelligence pipelines built upon vector-based retrieval can be rigid and brittle. They rely on static vector embeddings that are expensive to compute, blind to real-time changes, and detached from the raw context. We introduce Sirchmunk to usher in a more agile paradigm, where data is no longer treated as a snapshot, and insights can evolve together with the data.


✨ Key Features

1. Embedding-Free: Data in its Purest Form

Sirchmunk works directly with raw data, bypassing the heavy overhead of squeezing your rich files into fixed-dimensional vectors.

  • Instant Search: No complex pre-processing pipelines or hours-long indexing; just drop your files and search immediately.
  • Full Fidelity: Zero information loss; stay true to your data without vector approximation.

2. Self-Evolving: A Living Index

Data is a stream, not a snapshot. Sirchmunk is dynamic by design, whereas a vector DB can become obsolete the moment your data changes.

  • Context-Aware: Evolves in real-time with your data context.
  • LLM-Powered Autonomy: Designed for Agents that perceive data as it lives, utilizing token-efficient reasoning that triggers LLM inference only when necessary to maximize intelligence while minimizing cost.

3. Intelligence at Scale: Real-Time & Massive

Sirchmunk bridges massive local repositories and the web with high-scale throughput and real-time awareness.
It serves as a unified intelligent hub for AI agents, delivering deep insights across vast datasets at the speed of thought.


Traditional RAG vs. Sirchmunk

| Dimension | Traditional RAG | ✨ Sirchmunk |
| --- | --- | --- |
| 💰 Setup Cost | High overhead (VectorDB, GraphDB, complex document parsers, ...) | ✅ Zero infrastructure: direct-to-data retrieval without vector silos |
| 🕒 Data Freshness | Stale (batch re-indexing) | ✅ Instant & dynamic: self-evolving index that reflects live changes |
| 📈 Scalability | Linear cost growth | ✅ Extremely low RAM/CPU consumption: native elastic support, efficiently handles large-scale datasets |
| 🎯 Accuracy | Approximate vector matches | ✅ Deterministic & contextual: hybrid logic ensuring semantic precision |
| ⚙️ Workflow | Complex ETL pipelines | ✅ Drop-and-search: zero-config integration for rapid deployment |

Demonstration

Sirchmunk WebUI

Access files directly to start chatting


🎉 News

  • 🚀 Feb 5, 2026: Release v0.0.2 – MCP Support, CLI Commands & Knowledge Persistence!

    • MCP Integration: Full Model Context Protocol support, works seamlessly with Claude Desktop and Cursor IDE.
    • CLI Commands: New sirchmunk CLI with init, config, serve, and search commands.
    • KnowledgeCluster Persistence: DuckDB-powered storage with Parquet export for efficient knowledge management.
    • Knowledge Reuse: Semantic similarity-based cluster retrieval for faster searches via embedding vectors.
  • 🎉🎉 Jan 22, 2026: Introducing Sirchmunk: Initial Release v0.0.1 Now Available!


🚀 Quick Start

Prerequisites

  • Python 3.10+
  • LLM API Key (OpenAI-compatible endpoint, local or remote)
  • Node.js 18+ (Optional, for web interface)

Installation

# Create virtual environment (recommended)
conda create -n sirchmunk python=3.13 -y && conda activate sirchmunk 

pip install sirchmunk

# Or via UV:
uv pip install sirchmunk

# Alternatively, install from source:
git clone https://github.com/modelscope/sirchmunk.git && cd sirchmunk
pip install -e .

Python SDK Usage

import asyncio

from sirchmunk import AgenticSearch
from sirchmunk.llm import OpenAIChat

llm = OpenAIChat(
    api_key="your-api-key",
    base_url="your-base-url",   # e.g., https://api.openai.com/v1
    model="your-model-name",    # e.g., gpt-4o
)

async def main():
    searcher = AgenticSearch(llm=llm)

    result: str = await searcher.search(
        query="How does transformer attention work?",
        search_paths=["/path/to/documents"],
    )

    print(result)

asyncio.run(main())

⚠️ Notes:

  • Upon initialization, AgenticSearch automatically checks whether ripgrep-all and ripgrep are installed; if they are missing, it attempts to install them automatically. If the automatic installation fails, please install them manually.
  • Replace "your-api-key", "your-base-url", "your-model-name", and /path/to/documents with your actual values.

Command Line Interface

Sirchmunk provides a powerful CLI for server management and search operations.

Installation

pip install "sirchmunk[web]"

# or install via UV
uv pip install "sirchmunk[web]"

Initialize

# Initialize Sirchmunk with default settings (Default work path: `~/.sirchmunk/`)
sirchmunk init

# Alternatively, initialize with custom work path
sirchmunk init --work-path /path/to/workspace

Configure

# Show current configuration
sirchmunk config

# Regenerate configuration file if needed (Default config file: ~/.sirchmunk/.env)
sirchmunk config --generate

Start API Server

# Start server with default settings
sirchmunk serve

# Custom host and port
sirchmunk serve --host 0.0.0.0 --port 8000

# Development mode with auto-reload
sirchmunk serve --reload

Search

# Search in current directory
sirchmunk search "How does authentication work?"

# Search in specific paths
sirchmunk search "find all API endpoints" ./src ./docs

# Quick filename search
sirchmunk search "config" --mode FILENAME_ONLY

# Output as JSON
sirchmunk search "database schema" --output json

# Use API server (requires running server)
sirchmunk search "query" --api --api-url http://localhost:8584

Available Commands

| Command | Description |
| --- | --- |
| `sirchmunk init` | Initialize working directory and configuration |
| `sirchmunk config` | Show or generate configuration |
| `sirchmunk serve` | Start the API server |
| `sirchmunk search` | Perform search queries |
| `sirchmunk version` | Show version information |

🔌 MCP Server

Sirchmunk provides a Model Context Protocol (MCP) server that exposes its intelligent search capabilities as MCP tools. This enables seamless integration with AI assistants like Claude Desktop and Cursor IDE.

Quick Start

# Install MCP package
pip install sirchmunk-mcp

# Initialize and configure
sirchmunk-mcp init
sirchmunk-mcp config --generate

# Edit ~/.sirchmunk/.mcp_env with your LLM API key

# Test with MCP Inspector
npx @modelcontextprotocol/inspector sirchmunk-mcp serve

Features

  • Multi-Mode Search: DEEP mode for comprehensive analysis, FILENAME_ONLY for fast file discovery
  • Knowledge Cluster Management: Automatic extraction, storage, and reuse of knowledge
  • Standard MCP Protocol: Works with stdio and Streamable HTTP transports

📖 For detailed documentation, see the Sirchmunk MCP README.


🖥️ Web UI

The web UI is built for fast, transparent workflows: chat, knowledge analytics, and system monitoring in one place.

Sirchmunk Home

Home: Chat with streaming logs, file-based RAG, and session management.

Sirchmunk Monitor

Monitor: System health, chat activity, knowledge analytics, and LLM usage.

Installation

git clone https://github.com/modelscope/sirchmunk.git && cd sirchmunk

pip install ".[web]"

npm install --prefix web

  • Note: Node.js 18+ is required for the web interface.

Running the Web UI

# Start frontend and backend
python scripts/start_web.py 

# Stop frontend and backend
python scripts/stop_web.py

Access the web UI at the default address.

Configuration:

  • Access Settings → Environment Variables to configure the LLM API and other parameters.

πŸ—οΈ How it Works

Sirchmunk Framework

Sirchmunk Architecture

Core Components

| Component | Description |
| --- | --- |
| `AgenticSearch` | Search orchestrator with LLM-enhanced retrieval capabilities |
| `KnowledgeBase` | Transforms raw results into structured knowledge clusters with evidence |
| `EvidenceProcessor` | Evidence processing based on Monte Carlo importance sampling (see the sketch below) |
| `GrepRetriever` | High-performance indexless file search with parallel processing |
| `OpenAIChat` | Unified LLM interface supporting streaming and usage tracking |
| `MonitorTracker` | Real-time system and application metrics collection |
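
As a rough intuition for the Monte Carlo importance sampling behind EvidenceProcessor, here is a minimal, illustrative sketch (not Sirchmunk's actual implementation; the function and variable names are hypothetical): candidate evidence chunks are drawn with probability proportional to a relevance weight, so a limited LLM context budget concentrates on the most promising evidence.

import random

# Illustrative only: weighted (importance) sampling over candidate evidence.
# `sample_evidence`, `chunks`, and `weights` are hypothetical names, not part
# of Sirchmunk's public API.
def sample_evidence(chunks: list[str], weights: list[float], k: int) -> list[str]:
    # random.choices draws k items (with replacement), each chosen with
    # probability proportional to its weight
    return random.choices(chunks, weights=weights, k=k)

candidates = ["chunk about attention", "unrelated boilerplate", "QKV math detail"]
relevance = [0.8, 0.05, 0.9]
print(sample_evidence(candidates, relevance, k=2))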

Data Storage

All persistent data is stored in the configured SIRCHMUNK_WORK_PATH (default: ~/.sirchmunk/):

{SIRCHMUNK_WORK_PATH}/
├── .cache/
│   ├── history/              # Chat session history (DuckDB)
│   │   └── chat_history.db
│   ├── knowledge/            # Knowledge clusters (Parquet)
│   │   └── knowledge_clusters.parquet
│   └── settings/             # User settings (DuckDB)
│       └── settings.db
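
For example, the DuckDB files can be opened directly. The snippet below is a minimal sketch assuming the default work path; the table names inside are not documented here, so list them first:

import os
import duckdb

# Read-only peek at the chat history store under the default work path.
db_path = os.path.expanduser("~/.sirchmunk/.cache/history/chat_history.db")
con = duckdb.connect(db_path, read_only=True)
print(con.sql("SHOW TABLES"))  # discover table names before querying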


❓ FAQ

How is this different from traditional RAG systems?

Sirchmunk takes an indexless approach:

  1. No pre-indexing: Direct file search without vector database setup (see the sketch below)
  2. Self-evolving: Knowledge clusters evolve based on search patterns
  3. Multi-level retrieval: Adaptive keyword granularity for better recall
  4. Evidence-based: Monte Carlo sampling for precise content extraction
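
As a rough illustration of point 1 (conceptual only, not Sirchmunk's internals): indexless retrieval matches at query time against the raw files, so there is no index to build or refresh. A bare-bones version can shell out to ripgrep (rg); grep_candidates is a hypothetical helper:

import json
import subprocess

def grep_candidates(keyword: str, path: str) -> list[dict]:
    """Collect raw matches for a keyword under a path via ripgrep's JSON output."""
    proc = subprocess.run(
        ["rg", "--json", "--ignore-case", keyword, path],
        capture_output=True,
        text=True,
    )
    matches = []
    for line in proc.stdout.splitlines():
        event = json.loads(line)
        if event.get("type") == "match":  # skip begin/end/summary events
            matches.append(event["data"])
    return matches

print(grep_candidates("attention", "/path/to/documents")[:3])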
What LLM providers are supported?

Any OpenAI-compatible API endpoint, including (but not limited to):

  • OpenAI (GPT-4, GPT-4o, GPT-3.5)
  • Local models served via Ollama, llama.cpp, vLLM, SGLang, etc.
  • Claude via an API proxy
How do I add documents to search?

Simply specify the path in your search query:

result = await searcher.search(
    query="Your question",
    search_paths=["/path/to/folder", "/path/to/file.pdf"]
)

No pre-processing or indexing required!

Where are knowledge clusters stored?

Knowledge clusters are persisted in Parquet format at:

{SIRCHMUNK_WORK_PATH}/.cache/knowledge/knowledge_clusters.parquet

You can query them directly with DuckDB or via the KnowledgeManager API, for example:
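
The snippet below is a sketch assuming the default work path; the cluster schema is easiest to discover with a plain SELECT *:

import os
import duckdb

# Inspect the first few persisted knowledge clusters.
parquet_path = os.path.expanduser(
    "~/.sirchmunk/.cache/knowledge/knowledge_clusters.parquet"
)
print(duckdb.sql(f"SELECT * FROM read_parquet('{parquet_path}') LIMIT 5"))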

How do I monitor LLM token usage?

  1. Web Dashboard: Visit the Monitor page for real-time statistics
  2. API: GET /api/v1/monitor/llm returns usage metrics (see the sketch below)
  3. Code: Access searcher.llm_usages after search completion
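
A minimal sketch of option 2, assuming the API server started by sirchmunk serve is reachable at the http://localhost:8584 address used in the CLI examples above:

import requests

# Fetch LLM usage metrics from a running Sirchmunk API server.
resp = requests.get("http://localhost:8584/api/v1/monitor/llm", timeout=10)
resp.raise_for_status()
print(resp.json())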

📋 Roadmap

  • Text-retrieval from raw files
  • Knowledge structuring & persistence
  • Real-time chat with RAG
  • Web UI support
  • Web search integration
  • Multi-modal support (images, videos)
  • Distributed search across nodes
  • Knowledge visualization and deep analytics
  • More file type support

🤝 Contributing

We welcome contributions!


📄 License

This project is licensed under the Apache License 2.0.


ModelScope · ⭐ Star us · 🐛 Report a bug · 💬 Discussions

✨ Sirchmunk: Raw data to self-evolving intelligence, real-time.

❤️ Thanks for visiting ✨ Sirchmunk!
