Long-term memory via vector database #74
Comments
GREAT idea. Reason why this is great, to clarify for others: it will make much more efficient use of the limited context window when calling GPT, by restricting what gets loaded in from long-term memory to the stuff that is most likely relevant. And it will allow for a pretty much indefinite length of usable long-term memory. Think of this as an "associative memory", where the association comes from figuring out what in long-term memory is closest in conceptual space to the current context.
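A minimal sketch of that "associative memory" idea, using plain cosine similarity over embedding vectors. This is pure Python with toy 3-dimensional vectors standing in for real model embeddings; a real setup would use an embedding model plus a vector DB, and the memory texts here are hypothetical examples.

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors:
    # how close two pieces of text are in "conceptual space".
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def most_relevant(memories, query_vec, k=2):
    # Rank stored (text, vector) memories by similarity to the
    # current context, and return only the top-k texts so the
    # limited context window isn't wasted on irrelevant memories.
    ranked = sorted(memories, key=lambda m: cosine_similarity(m[1], query_vec),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

# Toy long-term memory store.
memories = [
    ("user prefers Python", [0.9, 0.1, 0.0]),
    ("project deadline is Friday", [0.0, 0.8, 0.2]),
    ("user dislikes verbose output", [0.7, 0.2, 0.1]),
]
query = [0.8, 0.15, 0.05]  # embedding of the current conversation turn
print(most_relevant(memories, query, k=2))
```

Only the top-k retrieved texts would then be prepended to the GPT prompt, which is what keeps the context-window usage bounded even as the memory store grows.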
Seconded. I think this is a must.
I have a PR for this, see #122
Found this open-source vector DB: https://github.com/spotify/annoy. Thoughts?
Also this: https://github.com/milvus-io/milvus
I have a PR for this, #801
Implemented
Co-authored-by: Silen Naihin <silen.naihin@gmail.com>
https://www.mlq.ai/gpt-4-pinecone-website-ai-assistant/
Using a vector database like Pinecone, a user ought to be able to store memory for the bot and query it as needed.
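The store-and-query workflow described above could look roughly like this. Note this is an in-memory stand-in, not Pinecone's actual client API; the `VectorMemory` class, its `upsert`/`query` methods, and all the data are illustrative assumptions, and a real deployment would swap in Pinecone, Milvus, or Annoy behind the same kind of interface.

```python
import math

class VectorMemory:
    """In-memory stand-in for a vector DB (hypothetical interface)."""

    def __init__(self):
        self._items = []  # list of (item_id, vector, text)

    def upsert(self, item_id, vector, text):
        # Insert a memory, replacing any existing entry with the same id.
        self._items = [i for i in self._items if i[0] != item_id]
        self._items.append((item_id, vector, text))

    def query(self, vector, top_k=1):
        # Return the top_k stored memories closest to the query vector.
        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(y * y for y in b))
            return dot / (na * nb) if na and nb else 0.0
        return sorted(self._items, key=lambda i: cos(i[1], vector),
                      reverse=True)[:top_k]

# Store memories as the bot runs, then query them when building a prompt.
mem = VectorMemory()
mem.upsert("m1", [1.0, 0.0], "fact one")
mem.upsert("m2", [0.0, 1.0], "fact two")
print(mem.query([0.9, 0.1], top_k=1))
```

The `upsert` semantics (insert-or-replace by id) mirror how most vector DBs let the bot revise a memory without accumulating duplicates.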