OS-Level Memory Layer for LLMs, AI Agents & Multi-Agent Systems with long-term, working, and external memory.
Mirix is a multi-agent personal assistant designed to track on-screen activities and answer user questions intelligently. By capturing real-time visual data and consolidating it into structured memories, Mirix transforms raw inputs into a rich knowledge base that adapts to your digital experiences.
User Profile-Based Long-Term Memory for AI Chatbot Applications.
Plug-and-play memory for LLMs in 3 lines of code. Add persistent, intelligent, human-like memory and recall to any model in minutes.
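The "3 lines of code" integration such projects advertise usually amounts to: create a store, add facts, and recall relevant ones at question time. The sketch below illustrates that shape with a hypothetical `Memory` class and a toy keyword-overlap recall; it is not any specific project's actual API.

```python
from dataclasses import dataclass, field

# Hypothetical plug-and-play memory layer: store facts, recall by keyword overlap.
# The class name and methods are illustrative stand-ins, not a real library's API.
@dataclass
class Memory:
    facts: list[str] = field(default_factory=list)

    def add(self, fact: str) -> None:
        """Persist one memory (real systems would embed and index it)."""
        self.facts.append(fact)

    def search(self, query: str, k: int = 3) -> list[str]:
        """Return up to k facts ranked by shared words with the query."""
        words = set(query.lower().split())
        ranked = sorted(self.facts,
                        key=lambda f: -len(words & set(f.lower().split())))
        return ranked[:k]

# The advertised "three lines" of integration:
mem = Memory()
mem.add("User prefers concise answers in Python.")
context = mem.search("How should I answer this user's question?")
```

The recalled `context` would then be prepended to the model prompt; a production layer swaps the keyword overlap for embedding similarity.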
Awesome AI Memory | LLM Memory | A curated knowledge base on AI memory for LLMs and agents, covering long-term memory, reasoning, retrieval, and memory-native system design. Awesome-AI-Memory is a centralized, continuously updated knowledge base on AI memory that systematically organizes cutting-edge research, engineering frameworks, system designs, evaluation benchmarks, and real-world applications related to LLM memory and agent memory.
Curated systems, benchmarks, papers, and more on memory for LLMs/MLLMs: long-term context, retrieval, and reasoning.
Give Claude Code photographic memory in ONE portable file. No database, no SQLite, no ChromaDB - just a single .mv2 file you can git commit, scp, or share. Native Rust core with sub-ms operations.
[COLM 2025] Know Me, Respond to Me: Benchmarking LLMs for Dynamic User Profiling and Personalized Responses at Scale
HaluMem is the first operation-level hallucination evaluation benchmark tailored to agent memory systems.
✨ mem0 MCP Server: A memory system using mem0 for AI applications with Model Context Protocol (MCP) integration. Enables long-term memory for AI agents as a drop-in MCP server.
Reproducible Structured Memory for LLMs
Reliable and Efficient Semantic Prompt Caching with vCache
Predictive memory layer for AI agents. MongoDB + Qdrant + Neo4j with multi-tier caching, custom schema support & GraphQL. 91% Stanford STARK accuracy, <100ms on-device retrieval
Code and data for VTCBench, a VLM benchmark for long-context understanding under the vision-text compression paradigm.
Universal infinite memory layer for developer AI assistants. One shared brain across Claude, Cursor, Windsurf & more. 100% local, built on the MCP standard. Stop re-explaining context.
An MCP (Model Context Protocol) server providing long-term memory for LLMs.
Self-consolidating semantic memory for AI agents with Pydantic schemas, intelligent deduplication, and FAISS vector search.
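Similarity-based deduplication of the kind described can be sketched as below. A toy character-trigram feature set stands in for neural embeddings indexed with FAISS, and the threshold is illustrative; the point is that a near-duplicate memory is rejected (or consolidated) rather than stored twice.

```python
def trigrams(text: str) -> set[str]:
    # Toy feature extractor standing in for an embedding model + FAISS index.
    t = text.lower()
    return {t[i:i + 3] for i in range(len(t) - 2)}

def jaccard(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

class MemoryStore:
    def __init__(self, dedup_threshold: float = 0.7):
        self.memories: list[str] = []
        self.threshold = dedup_threshold  # illustrative similarity cutoff

    def add(self, memory: str) -> bool:
        """Store a memory unless a near-duplicate already exists.
        Returns True if stored, False if rejected as a duplicate."""
        feats = trigrams(memory)
        for existing in self.memories:
            if jaccard(feats, trigrams(existing)) >= self.threshold:
                return False  # near-duplicate: consolidate instead of re-storing
        self.memories.append(memory)
        return True
```

A real system compares vectors with a FAISS nearest-neighbor query instead of a linear scan, which keeps deduplication fast as the store grows.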
A simple MCP server that stores and retrieves memories from multiple LLMs.
A human-like, self-evolving, self-cleaning AI memory system for LLMs.
Official Python SDK for SwastikAI