feat: Add support for langchain-anthropic in LLMEndpoint completes CORE-144 #2991
This commit adds support for the langchain-anthropic language model in the `LLMEndpoint` class. The function has been updated to include the new model names. Now, when configuring the `LLMEndpoint`, if the model name starts with `claude`, the `ChatAnthropic` class from langchain-anthropic is used instead of `ChatOpenAI` from langchain-openai.
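The prefix rule described above can be sketched as a small helper. This is a hedged sketch, not the actual Quivr code: the real logic lives inside `LLMEndpoint`, and the function name below is hypothetical.

```python
def pick_chat_class_name(model: str) -> str:
    """Return which langchain chat class would back the endpoint,
    following the model-name prefix rule described in this PR."""
    if model.startswith("claude"):
        return "ChatAnthropic"  # provided by langchain-anthropic
    return "ChatOpenAI"  # provided by langchain-openai
```

For example, `pick_chat_class_name("claude-3-opus-20240229")` returns `"ChatAnthropic"`, while any other model name (e.g. `"gpt-4o"`) falls through to `"ChatOpenAI"`.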
- fix core-142 (Co-authored-by: Stan Girard, aminediro)
- Complete rewrite of SyncUtils: eager processing of Syncs on Create and Update Sync; greatly simplified the file-filtering logic; fixed notifier async knowledge handling. (Co-authored-by: Stan Girard, aminediro)
- Changes to the knowledge table schema and models to improve functionality and data organization: new columns, modified existing columns, and updated relationships. These changes enhance the overall performance and usability of the application. (Co-authored-by: Stan Girard, aminediro)
Needs `export UV_INDEX_STRATEGY=unsafe-first-match` to work. Completes CORE-153.
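The note above means the environment variable must be set before resolving dependencies. A minimal sketch, assuming the project installs with `uv` (the exact install command is an assumption, so it is left commented out):

```shell
# Required before installing dependencies, per the note above (CORE-153):
export UV_INDEX_STRATEGY=unsafe-first-match
# then run the project's usual install step, e.g.:
# uv sync
echo "$UV_INDEX_STRATEGY"
```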
This commit adds the langchain-anthropic dependency to the project's pyproject.toml file. The version specified is 0.1.23.
@@ -13,7 +13,8 @@ dependencies = [
     "rich>=13.7.1",
     "tiktoken>=0.7.0",
     "aiofiles>=24.1.0",
-    "langchain-community>=0.2.12"
+    "langchain-community>=0.2.12",
+    "langchain-anthropic>=0.1.23",
Is anthropic present in the core/base environment?
backend/core/tests/test_quivr_rag.py
The quivr_rag tests depend on a fixture generated from the OpenAI endpoint. Maybe we can generate a fixture file for Claude so that the mock response closely matches the API endpoint 👍🏼
api_key=SecretStr(config.llm_api_key) if config.llm_api_key else None,
base_url=config.llm_base_url,
)
if config.model.startswith("claude"):
Just to reduce code boilerplate — this works if and only if the two classes take the exact same params:
if config.model.startswith("claude"):
    from langchain_anthropic import ChatAnthropic
    cls = ChatAnthropic
else:
    cls = ChatOpenAI
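The suggestion above picks the class once and then constructs it with a single shared argument list; as the reviewer cautions, that only works if both classes accept the same constructor parameters. A runnable sketch of the pattern, using stand-in classes so it does not require langchain to be installed (the real classes live in langchain_anthropic and langchain_openai):

```python
# Stand-ins for langchain's ChatAnthropic / ChatOpenAI, used here only
# to illustrate the class-dispatch pattern from the review suggestion.
class ChatOpenAI:
    def __init__(self, model, api_key=None, base_url=None):
        self.model = model

class ChatAnthropic:
    def __init__(self, model, api_key=None, base_url=None):
        self.model = model

def make_llm(model: str, api_key=None, base_url=None):
    """Pick the class once, then construct with one shared call —
    valid only while both classes accept the same parameters."""
    cls = ChatAnthropic if model.startswith("claude") else ChatOpenAI
    return cls(model=model, api_key=api_key, base_url=base_url)
```

If the two constructors ever diverge (e.g. one gains an Anthropic-only parameter), the branches would need to be split back out.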
Thanks for your contributions, we'll be closing this PR as it has gone stale. Feel free to reopen if you'd like to continue the discussion.