A vector database adapter for ElizaOS that provides efficient similarity search capabilities through Qdrant, optimized for knowledge management and semantic search operations.
- Vector similarity search with cosine distance
- Efficient knowledge base management
- Built-in text preprocessing for better search quality
- UUID v5 compatibility for Qdrant IDs
- In-memory caching system
- Content metadata support
- Shared knowledge management
Requirements:
- Qdrant server (self-hosted or cloud)
- Node.js 23 or later
- ElizaOS installation
Install the adapter with npm:

```bash
npm install @elizaos-plugins/adapter-qdrant
```
Add the adapter to your ElizaOS configuration:
```json
{
  "plugins": ["@elizaos-plugins/adapter-qdrant"],
  "settings": {
    "QDRANT_URL": "your-qdrant-server-url",
    "QDRANT_KEY": "your-qdrant-api-key",
    "QDRANT_PORT": "6333",
    "QDRANT_VECTOR_SIZE": "1536" // Adjust based on your embedding size
  }
}
```
- `QDRANT_URL`: URL of your Qdrant server
- `QDRANT_KEY`: API key for authentication
- `QDRANT_PORT`: Port number of the Qdrant server
- `QDRANT_VECTOR_SIZE`: Dimension of your embedding vectors
The adapter provides specialized vector search capabilities:
- Cosine similarity search
- Configurable vector dimensions
- Support for multiple embedding types
- Cache support for frequent searches
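Under the hood these searches run against Qdrant's cosine-distance index. As a rough illustration only (not the adapter's public API), a direct query with the `@qdrant/js-client-rest` client looks roughly like this; the collection name, credentials, and embedding below are placeholders:

```typescript
import { QdrantClient } from "@qdrant/js-client-rest";

// Placeholder connection details; the adapter builds these from
// QDRANT_URL, QDRANT_KEY, and QDRANT_PORT.
const client = new QdrantClient({
  url: "https://your-qdrant-server-url",
  apiKey: "your-qdrant-api-key",
  port: 6333,
});

// Replace with a real embedding from your model (dimension must match QDRANT_VECTOR_SIZE).
const queryEmbedding: number[] = new Array(1536).fill(0);

// Cosine similarity search: returns the top-k closest points with their payloads.
const results = await client.search("knowledge", {
  vector: queryEmbedding,
  limit: 10,
  score_threshold: 0.75, // optional minimum similarity
  with_payload: true,
});
console.log(results.map((r) => ({ id: r.id, score: r.score })));
```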
Knowledge items are stored with:
- Vector embeddings for similarity search
- Metadata support for additional information
- Shared/private knowledge separation
- Content versioning through chunk management
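As a rough sketch, a stored knowledge item has a shape along these lines; the field names approximate the ElizaOS knowledge schema and are illustrative rather than a definitive contract:

```typescript
// Illustrative shape of a knowledge item as stored by the adapter.
interface KnowledgeItem {
  id: string;                 // UUID v5, compatible with Qdrant point IDs
  agentId: string;            // owning agent (ignored for shared knowledge)
  content: {
    text: string;             // preprocessed source text
    metadata?: {
      isShared?: boolean;     // visible to all agents when true
      isChunk?: boolean;      // true for a chunked fragment of a larger document
      originalId?: string;    // parent document ID for a chunk
      chunkIndex?: number;    // position of the chunk within the parent
    };
  };
  embedding?: number[];       // vector used for similarity search
}
```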
Built-in text preprocessing for better search quality:
- Code block removal
- URL normalization
- Markdown cleanup
- Special character handling
- Whitespace normalization
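The exact rules are internal to the adapter, but a simplified sketch of these transformations might look like this:

```typescript
// Simplified sketch of the preprocessing steps above; the adapter's actual
// rules may differ in detail.
function preprocess(text: string): string {
  return text
    .replace(/`{3}[\s\S]*?`{3}/g, "")     // remove fenced code blocks
    .replace(/`[^`]*`/g, "")              // remove inline code
    .replace(/https?:\/\/\S+/g, " ")      // normalize away URLs
    .replace(/[#*_~>\[\]()]/g, " ")       // clean up common Markdown syntax
    .replace(/[^\w\s.,!?-]/g, " ")        // handle remaining special characters
    .replace(/\s+/g, " ")                 // normalize whitespace
    .trim()
    .toLowerCase();
}

// Example: preprocess("# Title\nSee https://example.com `code`") === "title see"
```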
Efficient in-memory caching:
- Per-agent cache isolation
- UUID-based cache keys
- Automatic cache management
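A minimal sketch of per-agent, UUID-keyed caching is shown below; it illustrates the idea rather than the adapter's exact implementation, and the namespace UUID is just a placeholder:

```typescript
import { v5 as uuidv5 } from "uuid";

// Placeholder namespace; any fixed UUID works for deterministic v5 keys.
const NAMESPACE = "6ba7b810-9dad-11d1-80b4-00c04fd430c8";

// agentId -> (cacheKey -> value): each agent gets its own isolated cache.
const caches = new Map<string, Map<string, unknown>>();

function cacheSet(agentId: string, key: string, value: unknown): void {
  const agentCache = caches.get(agentId) ?? new Map<string, unknown>();
  agentCache.set(uuidv5(key, NAMESPACE), value); // deterministic UUID v5 cache key
  caches.set(agentId, agentCache);
}

function cacheGet(agentId: string, key: string): unknown | undefined {
  return caches.get(agentId)?.get(uuidv5(key, NAMESPACE));
}
```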
This adapter primarily implements:
- Knowledge management
- Vector similarity search
- Caching operations
Other database functions (memory management, participant tracking, and so on) are stubbed but not implemented. Use a different adapter if you need these features.
The adapter automatically manages:
- Collection creation
- Vector indexes
- Point upserts with payload
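For orientation, these operations map onto standard Qdrant client calls roughly as sketched below; the collection name, vector size, ID, and payload are placeholders, not the adapter's internals:

```typescript
import { QdrantClient } from "@qdrant/js-client-rest";

const client = new QdrantClient({ url: "http://localhost:6333" }); // placeholder URL

// Create the collection if it does not exist yet (cosine distance, configured vector size).
const { collections } = await client.getCollections();
if (!collections.some((c) => c.name === "knowledge")) {
  await client.createCollection("knowledge", {
    vectors: { size: 1536, distance: "Cosine" },
  });
}

// Upsert a point with its payload (metadata) attached.
await client.upsert("knowledge", {
  wait: true,
  points: [
    {
      id: "9e107d9d-372b-5a5a-9c2f-0d6f8e1c4a2b", // placeholder; the adapter derives UUID v5 IDs
      vector: new Array(1536).fill(0),             // replace with a real embedding
      payload: { text: "example content", isShared: false },
    },
  ],
});
```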
For best results:
- Set correct vector dimensions based on your embedding model
- Use consistent embedding generation
- Consider caching for frequent searches
- Monitor memory usage with large cache sizes
FAQ:
- Can I use a custom embedding size? Yes, configure `QDRANT_VECTOR_SIZE` based on your embedding model's output size.
- Is caching required? No, caching is optional but recommended for performance when running repeated searches.
- Can knowledge be shared between agents? Yes, use the `isShared` flag in the knowledge metadata for shared content.
- How do I get the best search quality? Configure the vector size to match your model's output dimensions and ensure consistent preprocessing.
- Does the adapter support multiple agents? Yes, through per-agent isolation of knowledge and cache.
- Does it support full-text search? The adapter focuses on vector similarity search; use the MongoDB or PostgreSQL adapters for full-text search.
- Can it scale horizontally? Yes, through Qdrant's native sharding capabilities.