MyMemory is a full-stack application designed to help you store, manage, and recall your memories using a conversational AI interface. It allows you to feed your memories into a secure system and then chat with an AI assistant that can retrieve and discuss these memories with context.
- Save Memories: Securely store your memories with a topic and detailed content.
- AI Chat Interface: Converse with an AI assistant (powered by Llama3 via Groq) to recall and discuss your stored memories.
- Semantic Search: Memories are retrieved based on semantic similarity to your queries, providing relevant results.
- Real-time Interaction: Uses Socket.IO for a responsive chat experience.
- Speech-to-Text: Input chat messages using your voice.
- Passkey Protection: The memory feeding section is protected by a passkey.
- Personalized Experience: Conversations are contextual and user-specific.
Backend:
- Framework: Flask (Python)
- Real-time Communication: Flask-SocketIO
- Vector Database: Pinecone
- Embeddings: Jina AI (jina-embeddings-v3)
- LLM Integration: Langchain, ChatGroq (llama3-8b-8192)
- Deployment: (Assumed, e.g., Koyeb, as hinted in the frontend `.env`)
Frontend:
- Framework: Next.js (React)
- Language: TypeScript
- Styling: Tailwind CSS
- UI Components: Shadcn UI
- Real-time Communication: Socket.IO Client
- Deployment: Vercel (implied by Vercel Analytics)
- Node.js (v18 or later recommended)
- Python (v3.8 or later recommended)
- `pip` (Python package installer)
- Access to Pinecone, Jina AI, and Groq API keys.
- Clone the repository:

      git clone https://github.com/your-username/spyrosigma-mymemory.git
      cd spyrosigma-mymemory/backend
- Create a virtual environment (recommended):

      python -m venv venv
      source venv/bin/activate  # On Windows: venv\Scripts\activate
- Install dependencies:

      pip install -r requirements.txt
- Set up environment variables: Create a `.env` file in the `backend` directory and add the following:

      PINECONE_API_KEY="YOUR_PINECONE_API_KEY"
      JINA_API_KEY="YOUR_JINA_AI_API_KEY"
      GROQ_API_KEY="YOUR_GROQ_API_KEY"
      FEED_MEMORY_PASSKEY="YOUR_CHOSEN_PASSKEY_FOR_FEEDING_MEMORIES"
      # For Flask session management; e.g., generate with os.urandom(24).hex()
      SECRET_KEY="YOUR_FLASK_SECRET_KEY"

  - `PINECONE_API_KEY`: Your API key for Pinecone.
  - `JINA_API_KEY`: Your API key for Jina AI (for embeddings).
  - `GROQ_API_KEY`: Your API key for Groq (for Llama3 model access).
  - `FEED_MEMORY_PASSKEY`: A secret passkey you define to protect the memory feeding functionality.
  - `SECRET_KEY`: A secret key for Flask app security.
- Run the backend server:

  For development:

      python app.py

  For production (using Gunicorn, as it's in `requirements.txt`):

      gunicorn --worker-class geventwebsocket.gunicorn.workers.GeventWebSocketWorker -w 1 app:app

  The backend will typically run on `http://127.0.0.1:5000`.
- Navigate to the frontend directory:

      cd ../frontend
- Install dependencies:

      npm install
      # or
      yarn install
- Set up environment variables: Create a `.env.local` file in the `frontend` directory (or rename/copy the existing `.env` file). The primary variable needed is:

      NEXT_PUBLIC_BACKEND_URL="http://localhost:5000"

  If you have deployed your backend, replace `http://localhost:5000` with your backend's public URL (e.g., the Koyeb URL `https://petite-terra-hanumansingh-445ce1a4.koyeb.app/`).

- Run the frontend development server:

      npm run dev
      # or
      yarn dev

  The frontend will typically run on `http://localhost:3000`.
- Input: The user provides a "Topic" and "Memory Details" through the frontend interface after unlocking with a passkey.
- API Call: The frontend sends this data to the `/save_memory/` endpoint on the Flask backend.
- Embedding: The backend's `memory_save.py` script takes the memory text and topic, then uses the Jina AI API (`jina-embeddings-v3` model) to generate vector embeddings.
- Storage: These embeddings, along with the original text and topic as metadata, are upserted into a Pinecone vector index (`mymemory` index, `October` namespace). Each memory gets a unique ID.
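The save path above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the project's actual `memory_save.py`: the function names, the standard-library HTTP call, and the response parsing are assumptions; only the model name, metadata fields, and `October` namespace come from the description above.

```python
import json
import uuid
import urllib.request
from typing import List

def embed_with_jina(text: str, api_key: str) -> List[float]:
    """Request one embedding from the Jina API (jina-embeddings-v3).
    The response shape here is an assumption based on Jina's
    OpenAI-compatible embeddings endpoint."""
    req = urllib.request.Request(
        "https://api.jina.ai/v1/embeddings",
        data=json.dumps({"model": "jina-embeddings-v3", "input": [text]}).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"][0]["embedding"]

def build_record(topic: str, memory: str, embedding: List[float]) -> dict:
    """Shape one Pinecone upsert record: a unique ID, the vector, and the
    original text/topic kept as metadata so it can be returned as chat context."""
    return {
        "id": str(uuid.uuid4()),
        "values": embedding,
        "metadata": {"Topic": topic, "memory": memory},
    }

# With a Pinecone client and index handle, the upsert would then look roughly like:
# index.upsert(vectors=[build_record(topic, memory, emb)], namespace="October")
```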
- Connection: The frontend establishes a Socket.IO connection with the backend. Each user is assigned a unique session ID.
- User Query: The user types a message or uses speech-to-text. The message is sent to the backend via Socket.IO (`user_message` event).
- Query Embedding: The backend generates an embedding for the user's query using Jina AI.
- Semantic Search: This query embedding is used to search the Pinecone index for the most semantically similar memory (top_k=1). The text from the matching memory is retrieved as context.
- LLM Interaction:
  - A system prompt is constructed, including the retrieved context.
  - Langchain's `ConversationChain` with `ConversationBufferMemory` is used to maintain conversation history for the user.
  - The ChatGroq LLM (Llama3 model) generates a response based on the user's query, the retrieved context, and the conversation history.
- Response: The AI's response is sent back to the frontend via Socket.IO (`bot_response` event) and displayed in the chat interface.
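The retrieve-then-answer step can be sketched as below. This is a minimal illustration under stated assumptions: `fetch_context`, `build_system_prompt`, and the prompt wording are hypothetical names invented here, not the app's actual code; the `top_k=1` query and `October` namespace come from the description above.

```python
from typing import List

# Hypothetical prompt template; the real app's wording may differ.
SYSTEM_TEMPLATE = (
    "You are a helpful assistant that recalls the user's saved memories.\n"
    "Relevant memory:\n{context}\n"
    "Answer the user using this memory when it applies."
)

def build_system_prompt(context: str) -> str:
    """Fold the retrieved memory text into the system prompt."""
    return SYSTEM_TEMPLATE.format(context=context or "(no matching memory found)")

def fetch_context(index, query_embedding: List[float]) -> str:
    """Query Pinecone for the single closest memory (top_k=1) and pull the
    original text back out of the stored metadata."""
    res = index.query(vector=query_embedding, top_k=1,
                      include_metadata=True, namespace="October")
    matches = res.get("matches", [])
    return matches[0]["metadata"]["memory"] if matches else ""
```

The resulting prompt would then be passed, along with the per-user `ConversationBufferMemory`, to the ChatGroq model.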
- `GET /`: Welcome message.
- `POST /validate-passkey/`: Validates the passkey for accessing the memory feeding section.
  - Request Body: `{ "passkey": "your_passkey" }`
  - Response: `{ "success": true }` or `{ "success": false, "message": "Invalid passkey" }`
- `POST /save_memory/`: Saves a new memory.
  - Request Body: `{ "Topic": "memory_topic", "memory": "memory_content" }`
  - Response: `{ "message": "Memory saved successfully" }` or an error.
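For illustration, the passkey check behind `/validate-passkey/` could look like the helper below. This is a sketch, not the project's actual handler; the function name is invented, and the constant-time `hmac.compare_digest` comparison is a small hardening suggestion over a plain `==`, which can leak timing information when comparing secrets.

```python
import hmac

def validate_passkey_response(payload: dict, expected_passkey: str) -> dict:
    """Return the JSON body the /validate-passkey/ endpoint is documented
    to produce, given the parsed request payload."""
    supplied = payload.get("passkey", "")
    if hmac.compare_digest(supplied, expected_passkey):
        return {"success": True}
    return {"success": False, "message": "Invalid passkey"}
```

In the Flask app, a route would parse the request JSON, call a check like this against `FEED_MEMORY_PASSKEY`, and `jsonify` the result.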
- `connect`: Client connects; a `user_id` is generated and sent back.
- `set_user_id` (emitted by server): Sends the unique `user_id` to the client.
- `join` (emitted by client): Client joins a room based on their `user_id`.
- `disconnect`: Client disconnects.
- `user_message` (emitted by client): Client sends a chat message.
  - Payload: `{ "data": "user_query_text", "user_id": "client_user_id" }`
- `bot_response` (emitted by server): Server sends the AI's response.
  - Payload: `{ "data": "ai_response_text" }`
Backend (`backend/.env`):

- `PINECONE_API_KEY`: Your Pinecone API key.
- `JINA_API_KEY`: Your Jina AI API key.
- `GROQ_API_KEY`: Your Groq API key.
- `FEED_MEMORY_PASSKEY`: Custom passkey for memory input.
- `SECRET_KEY`: Flask secret key.

Frontend (`frontend/.env.local`):

- `NEXT_PUBLIC_BACKEND_URL`: URL of the deployed or local backend (e.g., `http://localhost:5000`).
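Since every backend variable is required, a fail-fast startup check is worth having so a missing key surfaces immediately rather than as a confusing runtime error. A minimal sketch (the helper name is invented; the real app may handle this differently):

```python
import os

REQUIRED = ["PINECONE_API_KEY", "JINA_API_KEY", "GROQ_API_KEY",
            "FEED_MEMORY_PASSKEY", "SECRET_KEY"]

def missing_env(environ: dict) -> list:
    """Return the required backend settings that are absent or empty."""
    return [name for name in REQUIRED if not environ.get(name)]

# At startup, something like:
# missing = missing_env(os.environ)
# if missing:
#     raise RuntimeError(f"Missing required settings: {', '.join(missing)}")
```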
Contributions are welcome! If you'd like to contribute, please follow these steps:
- Fork the repository.
- Create a new branch (`git checkout -b feature/your-feature-name`).
- Make your changes.
- Commit your changes (`git commit -m 'Add some feature'`).
- Push to the branch (`git push origin feature/your-feature-name`).
- Open a Pull Request.
Please ensure your code follows the existing style and includes tests where appropriate.
- Satyam (SpyroSigma)
- Portfolio: spyrosigma.tech
MIT License Copyright (c) 2025 SpyroSigma
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.