Doc: Mem0 Notebook #78

Open · wants to merge 1 commit into base: main
1 change: 1 addition & 0 deletions README.md
@@ -62,6 +62,7 @@ If there are guides or examples that you'd like to see in the future, feel free
- [CrewAI with Groq](/tutorials/crewai-mixture-of-agents): Build a mixture of agents application with CrewAI.
- [E2B with Groq](/tutorials/e2b-code-interpreting): Build code execution with the Code Interpreter SDK by E2B.
- [JigsawStack with Groq](/tutorials/jigsawstack-prompt-engine): Learn how to automate your workflow and choose the best LLM for your prompts using JigsawStack's Prompt Engine powered by Groq.
- [Mem0 with Groq](/tutorials/mem0-groq/mem0-groq-tutorial.ipynb): Build memory-augmented agents using Mem0's persistent memory with Groq-powered LLMs.
- [Langroid with Groq](/tutorials/langroid-llm-agents): Create a multi-agent system using Langroid and Groq.
- [LiteLLM Proxy with Groq](/tutorials/litellm-proxy-groq): Call Groq through the LiteLLM proxy.
- [Toolhouse with Groq](/tutorials/toolhouse-for-tool-use-with-groq-api): Use Toolhouse to create simple tool integrations with Groq.
21 changes: 21 additions & 0 deletions tutorials/mem0-groq/README.md
@@ -0,0 +1,21 @@
# Mem0 + Groq Integration

This project demonstrates how to build high-performance, memory-augmented agents using **[Mem0](https://mem0.ai)** with **Groq**-accelerated LLMs.

Mem0 enables agents to persist and recall contextual memories, while Groq delivers ultra-fast inference using LPU-backed models. Combined, they provide the foundation for scalable and responsive intelligent systems.

## 🚀 Features

- **Persistent Memory**: Store and retrieve chat history, context, and metadata for long-term recall.
- **Groq-Powered LLMs**: Integrate blazing-fast models with near-real-time responses.
- **Flexible Configuration**: Swap model providers and tweak parameters with ease.
- **Semantic Memory Search**: Use OpenAI embeddings to fetch the most relevant past interactions.
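
Taken together, these features mean most setup reduces to a single configuration dict. Below is a minimal sketch of that dict (the values mirror the accompanying notebook; the commented-out `Memory.from_config` call requires valid `GROQ_API_KEY` and `OPENAI_API_KEY` environment variables, so it is not executed here):

```python
# Mem0 is configured with a plain dict; swapping providers or models
# only means editing this structure (values mirror the notebook).
config = {
    "llm": {
        "provider": "groq",
        "config": {
            "model": "llama3-70b-8192",
            "temperature": 0.1,
            "max_tokens": 2000,
        }
    }
}

# With GROQ_API_KEY and OPENAI_API_KEY set in the environment,
# a memory instance would then be created like this:
# from mem0 import Memory
# m = Memory.from_config(config)

print(config["llm"]["provider"])  # -> groq
```

Because the configuration is just data, switching to a different provider or model is a one-line change rather than a code rewrite.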

## 📚 Resources

- [Mem0 + Groq Integration Guide](https://docs.mem0.ai/components/llms/models/groq)
- [How Memory Works in Mem0](https://docs.mem0.ai/core-concepts/memory-operations)

---

For implementation details, refer to the accompanying notebook.
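
The notebook's search step iterates over a results dict returned by `m.search`. The sketch below uses a hypothetical, hard-coded response with the same shape (`results["results"]` is a list of memory records, each carrying a `"memory"` string), so it runs without any API keys:

```python
# Hypothetical search response, shaped like the one consumed in the
# notebook's Step 5; the records themselves are made-up examples.
results = {
    "results": [
        {"memory": "Loves sci-fi movies", "user_id": "alice"},
        {"memory": "Not a fan of thrillers", "user_id": "alice"},
    ]
}

# Same iteration pattern as the notebook: pull out each memory string.
for memory in results["results"]:
    print(memory["memory"])
```

In a real run, these records would be the memories Mem0 judged most semantically relevant to the query, ranked by embedding similarity.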
166 changes: 166 additions & 0 deletions tutorials/mem0-groq/mem0-groq-tutorial.ipynb
@@ -0,0 +1,166 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 🧠 Enabling Mem0 with Groq\n",
"\n",
"[**Mem0**](https://mem0.ai/) is a memory-augmented framework designed to build intelligent agents with contextual awareness. It allows developers to effortlessly store and retrieve conversational memories. Now with **Groq** integration, agents can harness blazing-fast, LPU-powered LLMs for ultra-responsive experiences.\n",
"\n",
"## Key Features\n",
"\n",
"- **Persistent Memory**: Store chat history, contextual information, and metadata for long-term recall.\n",
"\n",
"- **Groq-Powered LLMs**: Leverage ultra-fast models like `llama3-70b-8192` with minimal latency, thanks to Groq's LPU acceleration.\n",
"\n",
"- **Flexible Configuration**: Easily switch between different LLM providers and customize model parameters to fit your needs.\n",
"\n",
"- **Semantic Search**: Retrieve relevant memories using OpenAI embeddings for enhanced response accuracy.\n",
"\n",
"Let's get started with the example!\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 1: Install the packages"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install mem0ai groq"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 2: Set the API Keys"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"your-openai-api-key\" # Used for embeddings – Mem0 supports Gemini, HuggingFace, and more.\n",
"os.environ[\"GROQ_API_KEY\"] = \"your-groq-api-key\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 3: Create a Groq-backed Memory instance"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from mem0 import Memory\n",
"\n",
"# Configuration for Groq-backed LLM\n",
"config = {\n",
" \"llm\": {\n",
" \"provider\": \"groq\",\n",
" \"config\": {\n",
" \"model\": \"llama3-70b-8192\",\n",
" \"temperature\": 0.1,\n",
" \"max_tokens\": 2000,\n",
" }\n",
" }\n",
"}\n",
"\n",
"# Initialize Memory\n",
"m = Memory.from_config(config)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 💡 **How it works**\n",
"\n",
"Mem0 uses the LLM (via Groq) to process and structure memory, such as turning conversations into memory items or updating existing ones. The framework handles memory operations like storing, retrieving, and filtering user context, and is designed to be independent of response generation."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 4: Add messages to memory"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"messages = [\n",
" {\"role\": \"user\", \"content\": \"I'm planning to watch a movie tonight. Any recommendations?\"},\n",
" {\"role\": \"assistant\", \"content\": \"How about a thriller movie? They can be quite engaging.\"},\n",
" {\"role\": \"user\", \"content\": \"I’m not a big fan of thrillers but I love sci-fi movies.\"},\n",
" {\"role\": \"assistant\", \"content\": \"Got it! I'll avoid thriller recommendations and suggest sci-fi movies in the future.\"}\n",
"]\n",
"\n",
"m.add(messages, user_id=\"alice\", metadata={\"category\": \"movies\"})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 5: Search memories"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"results = m.search(\"Which movie should I watch?\", user_id=\"alice\")\n",
"\n",
"for memory in results[\"results\"]:\n",
" print(memory[\"memory\"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Resources\n",
"\n",
"- [**Mem0 + Groq Integration Guide**](https://docs.mem0.ai/components/llms/models/groq): Learn how to configure and run Mem0 with Groq-backed LLMs for ultra-low latency.\n",
"\n",
"- [**Understanding Memory in Mem0**](https://docs.mem0.ai/core-concepts/memory-operations): Dive into how memory is stored, retrieved, and managed in Mem0."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "base",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.12.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}