update langchain agent with qwen2.5 for better accuracy #2542
Conversation
openvino-dev-samples commented Nov 20, 2024
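For context on the change in the title, here is a minimal sketch of pointing the notebook's LangChain pipeline at Qwen2.5, assuming the `langchain_huggingface` OpenVINO backend; the model id, device, and generation settings are illustrative and not taken from this PR.

```python
from langchain_huggingface import HuggingFacePipeline

# Hedged sketch: load Qwen2.5 through the OpenVINO backend (optimum-intel).
# Model id, device, and generation settings are illustrative assumptions.
ov_llm = HuggingFacePipeline.from_model_id(
    model_id="Qwen/Qwen2.5-7B-Instruct",
    task="text-generation",
    backend="openvino",
    model_kwargs={"device": "CPU"},
    pipeline_kwargs={"max_new_tokens": 256},
)
print(ov_llm.invoke("Hello, what can you do?"))
```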
Check out this pull request on ReviewNB to see visual diffs & provide feedback on Jupyter Notebooks.
@openvino-dev-samples please update the test patching.
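As a hedged illustration of the test patching being discussed: the CI typically rewrites the notebook so the heavy chat model is swapped for a smaller one before the pre-commit run. The notebook path, model ids, and this standalone script are assumptions for illustration, not the actual patch from the review.

```python
import nbformat

NOTEBOOK = "llm-agent-react/llm-agent-react-langchain.ipynb"  # hypothetical path
HEAVY_MODEL = "Qwen/Qwen2.5-7B-Instruct"                      # hypothetical id
LIGHT_MODEL = "Qwen/Qwen2.5-1.5B-Instruct"                    # hypothetical id

# Replace the heavy model id with a lightweight one in every code cell,
# so the pre-commit CI can execute the notebook within resource limits.
nb = nbformat.read(NOTEBOOK, as_version=4)
for cell in nb.cells:
    if cell.cell_type == "code" and HEAVY_MODEL in cell.source:
        cell.source = cell.source.replace(HEAVY_MODEL, LIGHT_MODEL)
nbformat.write(nb, NOTEBOOK)
```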
fix ci issues (force-pushed from c43f4c5 to 82f0ca8)
I think a small model may lead to parsing errors on the agent's output.
@openvino-dev-samples ok, then I think it would be better to remove the patching and disable the notebook for pre-commit, as we cannot run 7B models due to GitHub limitations (it will still be validated in internal infra with more powerful hardware).
I don't know if "TinyLlama/TinyLlama-1.1B-Chat-v1.0" can pass the CI, let me try.
What about Qwen2.5-1.5B-Instruct? Do you think it is less accurate than TinyLlama?
It depends on how well the LLM can follow the prompt's instructions. Let me try TinyLlama first.
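A small sketch of why an undersized model can break the agent: the ReAct output parser expects the exact Thought/Action/Action Input format, and free-form answers raise a parsing exception. The parser class and sample texts below assume a recent LangChain layout and are illustrative only.

```python
from langchain.agents.output_parsers import ReActSingleInputOutputParser
from langchain_core.exceptions import OutputParserException

parser = ReActSingleInputOutputParser()

# A well-formed ReAct step parses into an AgentAction.
well_formed = (
    "Thought: I need the weather.\n"
    "Action: get_weather\n"
    "Action Input: London"
)
print(parser.parse(well_formed))

# A small model often answers in free text instead of the required format,
# which raises OutputParserException and stops the agent loop.
free_text = "The weather in London is probably rainy."
try:
    parser.parse(free_text)
except OutputParserException as err:
    print("Parsing failed:", err)

# Passing handle_parsing_errors=True to AgentExecutor lets the agent feed the
# error back to the model instead of crashing, which helps with small models.
```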
force-pushed from 6346c74 to 683a22b
replace llm test case (force-pushed from 683a22b to d68c2ad)
replace optimum-cli method (force-pushed from 6befd3f to e5fee63)
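For reference, a hedged sketch of the optimum-cli flow mentioned in that commit: exporting a smaller instruct model to OpenVINO IR with int4 weight compression and loading it back. The model id, output directory, and quantization setting are examples, not the exact values used in the PR.

```python
# Jupyter cell syntax: "!" runs a shell command from the notebook.
from pathlib import Path

model_id = "Qwen/Qwen2.5-1.5B-Instruct"            # illustrative model id
model_dir = Path("qwen2.5-1.5b-instruct-int4-ov")  # illustrative output dir

if not model_dir.exists():
    # Export the model to OpenVINO IR with int4 weight compression.
    !optimum-cli export openvino --model {model_id} --weight-format int4 {model_dir}

from optimum.intel.openvino import OVModelForCausalLM
from transformers import AutoTokenizer

# Load the exported model and its tokenizer for use in the agent pipeline.
ov_model = OVModelForCausalLM.from_pretrained(model_dir, device="CPU")
tokenizer = AutoTokenizer.from_pretrained(model_dir)
```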