Integrate Qdrant In-Memory Storage into LangChain #273
I propose integrating Qdrant's in-memory storage capabilities into LangChain to enable running Auto-GPT in local mode, without requiring a server, and to enhance Auto-GPT's memory feature.

Qdrant is an open-source vector search engine and vector database whose in-memory payload storage allows for fast search. Because it can run entirely in-process, integrating it into LangChain would let developers run Auto-GPT locally without standing up a server or incurring server costs, while also strengthening Auto-GPT's memory feature.
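For illustration, here is a minimal sketch of Qdrant's in-memory (local) mode using the `qdrant-client` Python package. The collection name, payloads, and vectors below are invented for the example; in Auto-GPT the vectors would come from an embedding model rather than being hard-coded.

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

# ":memory:" starts an in-process Qdrant instance -- no server, no Docker.
client = QdrantClient(":memory:")

# Collection name and vector size are arbitrary choices for this sketch.
client.create_collection(
    collection_name="autogpt_memory",
    vectors_config=VectorParams(size=4, distance=Distance.COSINE),
)

# In a real integration these vectors would be embeddings of agent text.
client.upsert(
    collection_name="autogpt_memory",
    points=[
        PointStruct(id=1, vector=[0.1, 0.2, 0.3, 0.4], payload={"text": "first memory"}),
        PointStruct(id=2, vector=[0.4, 0.3, 0.2, 0.1], payload={"text": "second memory"}),
    ],
)

# Nearest-neighbour search over the stored memories.
hits = client.search(
    collection_name="autogpt_memory",
    query_vector=[0.1, 0.2, 0.3, 0.4],
    limit=1,
)
print(hits[0].payload["text"])  # -> "first memory"
```

Because the client runs in-process, everything is lost when the process exits, which is the expected trade-off of a serverless local mode (qdrant-client also accepts a `path=` argument for on-disk local persistence).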
Comments
Do you think Qdrant or Chroma would be the best choice for this? I've been experimenting with Chroma, and its API and client are MUCH cleaner than Qdrant's.
@davelm42 Perhaps a new feature?
Yeah, I think that's what I'm going to work on this weekend unless someone beats me to it.
I was going to suggest Weaviate; Llama-index/gpt-index already supports it, and I am unfamiliar with these others. I am starting a local branch to add the Weaviate client; I'll abstract the memory interface and see if I can make it work without breaking anything.
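As a rough sketch of the kind of abstraction described in the comment above, a minimal backend-agnostic memory interface might look like the following; the class and method names here are hypothetical, not Auto-GPT's or LangChain's actual API.

```python
from abc import ABC, abstractmethod


class MemoryBackend(ABC):
    """Hypothetical interface that Qdrant, Chroma, or Weaviate backends
    could each implement, so the agent code stays storage-agnostic."""

    @abstractmethod
    def add(self, text: str) -> None:
        """Embed and store a piece of text."""

    @abstractmethod
    def get_relevant(self, query: str, k: int = 5) -> list[str]:
        """Return the k stored texts most similar to the query."""

    @abstractmethod
    def clear(self) -> None:
        """Drop all stored memories."""
```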
This appears to be a sample of the current langchain vector stores:
Looks like someone has a Weaviate PR out there right now: |
Closing as resolved with #424. Please reopen if you disagree. |