Closed
Description
opened on Dec 8, 2023
Feature request
Recently, https://github.com/ggerganov/llama.cpp added support for both QWEN and Baichuan2.
It added QWEN at 1610.
ggerganov/llama.cpp#4281
I have looked at the Nomic Vulkan fork of llama.cpp:
it does have support for Baichuan2 but not QWEN, and GPT4All itself does not support Baichuan2.
Motivation
I failed to load Baichuan2 and QWEN models, and GPT4All is supposed to be easy to use.
Your contribution
Not quite, as I am not a programmer, but I would look things up if that helps.