### 📊 Detailed Performance Benchmarks
| Metric | Unit | Current | Target | Status |
|---|---|---|---|---|
| **Document Processing** | | | | |
| Indexing Speed | Files/sec | 15.3 | >10 | ✅ |
| Chunk Processing | Chunks/sec | 48.2 | >40 | ✅ |
| **Vector Search** | | | | |
| Query Latency (P50) | Milliseconds | 87 | <100 | ✅ |
| Query Latency (P95) | Milliseconds | 142 | <200 | ✅ |
| Search Throughput | Searches/sec | 11.5 | >10 | ✅ |
| **Semantic Cache** | | | | |
| Hit Rate | Percentage | 34.2% | >31% | ✅ |
| Read Latency | Milliseconds | 12 | <20 | ✅ |
| Write Latency | Milliseconds | 45 | <250 | ✅ |
Performance metrics are updated automatically by the CI/CD pipeline; for the time of the last update, see the workflow runs.
EOL is a comprehensive AI framework for building intelligent, context-aware applications. This monorepo contains multiple packages for building applications with retrieval-augmented generation (RAG) capabilities.
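As a rough sketch of the retrieval half of a RAG pipeline (the function names and toy embeddings here are illustrative, not EOL's actual API), a retriever ranks indexed chunks by similarity to the query embedding and returns the top matches:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_emb, index, k=2):
    """Return the k chunk texts whose embeddings best match the query."""
    ranked = sorted(index, key=lambda item: cosine(query_emb, item[0]), reverse=True)
    return [text for _, text in ranked[:k]]

# Toy index of (embedding, chunk text) pairs
index = [
    ([1.0, 0.0], "chunk about indexing"),
    ([0.0, 1.0], "chunk about caching"),
    ([0.9, 0.1], "chunk about vector search"),
]
print(retrieve([1.0, 0.0], index, k=2))
# → ['chunk about indexing', 'chunk about vector search']
```

In production the brute-force `sorted` scan would be replaced by the Redis Stack vector index described below.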
- 🚀 Ultra-Fast Dependencies: Advanced wheel caching for 3-6x faster CI/CD builds
- 🔍 Intelligent Indexing: Content-aware chunking with AST parsing for code and semantic splitting for text
- 📊 Redis Vector Database: High-performance semantic search using Redis Stack v8
- 🧠 Semantic Caching: 31% cache hit rate target to reduce LLM API calls
- 🔗 Knowledge Graphs: Entity relationship mapping for enhanced context
- 📡 MCP Integration: Model Context Protocol server for seamless AI integration
- 👁️ File Watching: Auto-indexing with real-time change detection
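To make the "content-aware chunking with AST parsing" feature above concrete, here is a minimal sketch using Python's standard `ast` module; the function name and the one-chunk-per-definition policy are illustrative assumptions, not EOL's real chunker:

```python
import ast

def chunk_python_source(source: str):
    """Split Python source into one chunk per top-level function or class.

    A toy illustration of AST-aware chunking: boundaries follow syntax
    nodes instead of fixed character counts, so chunks never cut a
    definition in half.
    """
    tree = ast.parse(source)
    lines = source.splitlines()
    chunks = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            # lineno / end_lineno are 1-based inclusive line numbers
            chunks.append("\n".join(lines[node.lineno - 1 : node.end_lineno]))
    return chunks

sample = "def a():\n    return 1\n\nclass B:\n    pass\n"
for chunk in chunk_python_source(sample):
    print(chunk, end="\n---\n")
```

Semantic splitting for prose works analogously, with paragraph or sentence boundaries taking the place of AST nodes.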
- Python 3.13 or higher
- Redis Stack v8+ (with vector search module)
- UV package manager
This project uses UV for dependency and workspace management.
```bash
# Install dependencies
uv sync --all-packages

# Run tests
./test_all.sh
```

License: GPL-3.0