An MCP server implementation that provides tools for retrieving and processing documentation through vector search, enabling AI assistants to augment their responses with relevant documentation context.
- Features
- Quick Start
- Docker Compose Setup
- Web Interface
- Configuration
- Acknowledgments
- Troubleshooting
## Features

### Tools

1. `search_documentation`
   - Search through the documentation using vector search
   - Returns relevant chunks of documentation with source information

2. `list_sources`
   - List all available documentation sources
   - Provides metadata about each source

3. `extract_urls`
   - Extract URLs from text and check whether they are already in the documentation
   - Useful for preventing duplicate documentation

4. `remove_documentation`
   - Remove documentation from a specific source
   - Cleans up outdated or irrelevant documentation

5. `list_queue`
   - List all items in the processing queue
   - Shows the status of pending documentation processing

6. `run_queue`
   - Process all items in the queue
   - Automatically adds new documentation to the vector store

7. `clear_queue`
   - Clear all items from the processing queue
   - Useful for resetting the system

8. `add_documentation`
   - Add new documentation to the processing queue
   - Supports various formats and sources
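For a concrete sense of how these tools are invoked, the sketch below connects to the server over stdio and calls a few of them using the MCP TypeScript SDK. The argument shapes (`url`, `query`) are assumptions for illustration; inspect the schemas returned by `listTools()` for the actual parameters.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the server over stdio, the same way an MCP host would.
const transport = new StdioClientTransport({
  command: "node",
  args: ["/path/to/your/mcp-ragdocs/build/index.js"],
});

const client = new Client({ name: "ragdocs-example", version: "0.1.0" });
await client.connect(transport);

// Discover the tools and their input schemas.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

// Queue a documentation URL, process the queue, then search.
// The argument names below (`url`, `query`) are assumptions.
await client.callTool({
  name: "add_documentation",
  arguments: { url: "https://example.com/docs" },
});
await client.callTool({ name: "run_queue", arguments: {} });

const result = await client.callTool({
  name: "search_documentation",
  arguments: { query: "how do I configure embeddings?" },
});
console.log(JSON.stringify(result.content, null, 2));

await client.close();
```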
## Quick Start

The RAG Documentation tool is designed for:

- Enhancing AI responses with relevant documentation
- Building documentation-aware AI assistants
- Creating context-aware tooling for developers
- Implementing semantic documentation search
- Augmenting existing knowledge bases
## Docker Compose Setup

The project includes a `docker-compose.yml` file for easy containerized deployment. To start the services:

```bash
docker-compose up -d
```

To stop the services:

```bash
docker-compose down
```
## Web Interface

The system includes a web interface that can be accessed after starting the Docker Compose services:

1. Open your browser and navigate to `http://localhost:3030`
2. The interface provides:
   - Real-time queue monitoring
   - Documentation source management
   - A search interface for testing queries
   - System status and health checks
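To confirm the interface is reachable from a script, a simple fetch against the root page is enough. No specific API endpoints are documented here, so this sketch checks only that the page responds (requires Node 18+ for the global `fetch`):

```typescript
// Reachability check for the web interface at the documented URL.
const res = await fetch("http://localhost:3030");
console.log(res.ok ? "web interface is up" : `unexpected status ${res.status}`);
```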
## Configuration

### Embeddings Configuration

The system uses Ollama as the default embedding provider for local embedding generation, with OpenAI available as a fallback option. This setup prioritizes local processing while maintaining reliability through a cloud-based fallback.

The following environment variables control embedding behavior:

- `EMBEDDING_PROVIDER`: Primary embedding provider (`ollama` or `openai`; default: `ollama`)
- `EMBEDDING_MODEL`: Model to use (optional)
  - For OpenAI: defaults to `text-embedding-3-small`
  - For Ollama: defaults to `nomic-embed-text`
- `OPENAI_API_KEY`: Required when using OpenAI as the provider
- `FALLBACK_PROVIDER`: Optional backup provider (`ollama` or `openai`)
- `FALLBACK_MODEL`: Optional model for the fallback provider
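The fallback behavior described above can be pictured as a try/catch around the primary provider. The sketch below is illustrative only; the interface and function names are hypothetical, not the server's actual internals.

```typescript
// Hypothetical sketch of primary/fallback embedding selection; not the
// server's actual implementation.
interface EmbeddingProvider {
  name: string;
  embed(text: string): Promise<number[]>;
}

async function embedWithFallback(
  text: string,
  primary: EmbeddingProvider,
  fallback?: EmbeddingProvider,
): Promise<number[]> {
  try {
    // Prefer the local provider (Ollama by default).
    return await primary.embed(text);
  } catch (err) {
    if (!fallback) throw err;
    // Only reach out to the fallback (e.g. OpenAI) when the primary fails.
    console.error(`${primary.name} failed, trying ${fallback.name}:`, err);
    return fallback.embed(text);
  }
}
```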
### Cline Configuration

Add this to your `cline_mcp_settings.json`:

```json
{
  "mcpServers": {
    "rag-docs": {
      "command": "node",
      "args": ["/path/to/your/mcp-ragdocs/build/index.js"],
      "env": {
        "EMBEDDING_PROVIDER": "ollama", // default
        "EMBEDDING_MODEL": "nomic-embed-text", // optional
        "OPENAI_API_KEY": "your-api-key-here", // required for fallback
        "FALLBACK_PROVIDER": "openai", // recommended for reliability
        "FALLBACK_MODEL": "text-embedding-3-small", // optional, OpenAI default
        "QDRANT_URL": "http://localhost:6333"
      },
      "disabled": false,
      "autoApprove": [
        "search_documentation",
        "list_sources",
        "extract_urls",
        "remove_documentation",
        "list_queue",
        "run_queue",
        "clear_queue",
        "add_documentation"
      ]
    }
  }
}
```
### Claude Desktop Configuration

Add this to your `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "rag-docs": {
      "command": "node",
      "args": ["/path/to/your/mcp-ragdocs/build/index.js"],
      "env": {
        "EMBEDDING_PROVIDER": "ollama", // default
        "EMBEDDING_MODEL": "nomic-embed-text", // optional
        "OPENAI_API_KEY": "your-api-key-here", // required for fallback
        "FALLBACK_PROVIDER": "openai", // recommended for reliability
        "FALLBACK_MODEL": "text-embedding-3-small", // optional, OpenAI default
        "QDRANT_URL": "http://localhost:6333"
      }
    }
  }
}
```
The system uses Ollama by default for efficient local embedding generation. For optimal reliability:

1. Install and run Ollama locally
2. Configure OpenAI as a fallback (recommended):

```json
{
  // Ollama is used by default, no need to specify EMBEDDING_PROVIDER
  "EMBEDDING_MODEL": "nomic-embed-text", // optional
  "FALLBACK_PROVIDER": "openai",
  "FALLBACK_MODEL": "text-embedding-3-small",
  "OPENAI_API_KEY": "your-api-key-here"
}
```

This configuration ensures:

- Fast, local embedding generation with Ollama
- Automatic fallback to OpenAI if Ollama fails
- No external API calls unless necessary

Note: The system will automatically use the appropriate vector dimensions based on the provider:

- Ollama (`nomic-embed-text`): 768 dimensions
- OpenAI (`text-embedding-3-small`): 1536 dimensions
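Because the two models produce vectors of different sizes, the Qdrant collection must be created with a matching dimension. A minimal sketch of that mapping, assuming the `@qdrant/js-client-rest` client and a collection named `documentation` (the actual collection name is not documented here):

```typescript
import { QdrantClient } from "@qdrant/js-client-rest";

// Vector size per embedding model, as noted above.
const DIMENSIONS: Record<string, number> = {
  "nomic-embed-text": 768, // Ollama default
  "text-embedding-3-small": 1536, // OpenAI default
};

const model = process.env.EMBEDDING_MODEL ?? "nomic-embed-text";
const qdrant = new QdrantClient({
  url: process.env.QDRANT_URL ?? "http://localhost:6333",
});

// "documentation" is an assumed collection name for illustration.
await qdrant.createCollection("documentation", {
  vectors: { size: DIMENSIONS[model], distance: "Cosine" },
});
```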
## Acknowledgments

This project is a fork of `qpd-v/mcp-ragdocs`, originally developed by qpd-v. The original project provided the foundation for this implementation.

Special thanks to the original creator, qpd-v, for their innovative work on the initial version of this MCP server. This fork has been enhanced with additional features and improvements by Rahul Retnan.
## Troubleshooting

### Server Fails to Start (Port Conflict)

If the MCP server fails to start due to a port conflict, follow these steps:

1. Identify and kill the process using port 3030:

   ```bash
   npx kill-port 3030
   ```

2. Restart the MCP server

3. If the issue persists, check for other processes using the port:

   ```bash
   lsof -i :3030
   ```

4. You can also change the default port in the configuration if needed
### Logging

The server includes a custom logging system that safely handles logs when running in stdio mode. This prevents `console.log`/`console.error` statements from interfering with MCP's JSON-RPC communication.

Log levels:

- `INFO`: General operational information
- `ERROR`: Critical failures and errors
- `WARN`: Warning conditions
- `DEBUG`: Detailed debugging information (only active in debug mode)
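As an illustration of the stdio-safe approach described above, a logger along these lines writes to a file instead of stdout whenever the transport is stdio. All names and environment variables here are hypothetical; this is not the server's actual code.

```typescript
import { appendFileSync, mkdirSync } from "node:fs";

type LogLevel = "INFO" | "ERROR" | "WARN" | "DEBUG";

// Hypothetical flags; the real server's mode detection may differ.
const DEBUG_MODE = process.env.DEBUG === "true";
const STDIO_MODE = process.argv.includes("--stdio");
const LOG_FILE = "logs/mcp-ragdocs.log";

mkdirSync("logs", { recursive: true });

function log(level: LogLevel, message: string): void {
  if (level === "DEBUG" && !DEBUG_MODE) return;
  const line = `[${new Date().toISOString()}] [${level}] ${message}`;
  if (STDIO_MODE) {
    // stdout carries JSON-RPC in stdio mode, so logs go to a file instead.
    appendFileSync(LOG_FILE, line + "\n");
  } else {
    console.error(line);
  }
}
```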
Enable debug mode to see additional logs:

```bash
# Run with debug mode enabled
npm run start:debug

# Debug mode with stdio transport (for MCP)
npm run start:stdio:debug
```

When running in debug mode or with the stdio transport, logs are written to:

```
/path/to/mcp-ragdocs/logs/mcp-ragdocs.log
```

This is especially useful for troubleshooting when integrated with Claude or other MCP clients.