Open-source context retrieval layer for AI agents
Updated May 15, 2026 - Python
Local persistent memory store for LLM applications including Claude Desktop, GitHub Copilot, Codex, Antigravity, etc.
14-stage Fusion Pipeline for LLM token compression — reversible compression, AST-aware code analysis, intelligent content routing. Zero LLM inference cost. MIT licensed.
Semantica 🧠 — AI-native knowledge graph intelligence framework for semantic retrieval, ontology reasoning, context graphs, and explainable AI systems.
Plug-and-play memory for LLMs in 3 lines of code. Add persistent, intelligent, human-like memory and recall to any model in minutes.
🛡️Decision infrastructure for AI agents. Intercept actions, enforce guard policies, require approvals, and produce audit-ready decision trails.
Open-source protocol suite standardizing LLM, Vector, Graph, and Embedding infrastructure across LangChain, LlamaIndex, AutoGen, CrewAI, Semantic Kernel, and MCP. 3,330+ conformance tests. One protocol. Any framework. Any provider.
Grov automatically captures context from your private AI sessions and syncs it to a shared team memory. It auto-injects relevant memories across developers and future sessions to save tokens and time spent on tasks.
Local-first AI conversation memory and knowledge hub to capture, search, summarize, and export chats across major AI platforms.
AI Infrastructure Engineer Learning Track - Production ML infrastructure curriculum (2-4 years experience)
The open-source intelligence layer for AI native development
The open-source, no-code MCP Server for AI-Native API Access
Route inference across LLM providers. Track cost per request.
Local-first AI agent runtime for workspaces, wakeups, and long-running task execution
Unified AI Gateway for 30+ LLMs (OpenAI, Anthropic, Bedrock, Azure, etc.) with caching, guardrails, A/B testing, and cost controls. A fast, scalable, Go-native alternative to LiteLLM and Kong AI Gateway.
Kubernetes operator for local LLM inference with llama.cpp, vLLM, and TGI - multi-GPU, autoscaling, air-gapped, production-ready
Task-management CLI for the Qizhi (启智) platform: resource queries, job submission, log viewing, and MCP/agent workflows
Distributed data mesh for real-time access, migration, and replication across diverse databases — built for AI, security, and scale.
⚓️ Kubernetes-Native Database-Driven Provisioning Workflow Automation
A Rust runtime that unifies relational tables, graph relationships, and vector embeddings in a single tensor-based storage layer with distributed consensus and semantic search