"Falcon" is a new Large Language Model which seems to be better than Llama. See https://falconllm.tii.ae/ and https://iamgeekydude.com/2023/05/28/falcon-llm-the-40-billion-parameters-llm/ and https://www.marktechpost.com/2023/05/28/technology-innovation-institute-open-sourced-falcon-llms-a-new-ai-model-that-uses-only-75-percent-of-gpt-3s-training-compute-40-percent-of-chinchillas-and-80-percent-of-palm-62b/
According to its authors, it is currently the best open-source model available.
Models for the Hugging Face Transformers library are available in 40B and 7B parameter variants at: https://huggingface.co/tiiuae/falcon-40b
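For anyone who wants to try the reference implementation first, here is a minimal sketch of loading it through Transformers. The `tiiuae/falcon-7b` repo id is an assumption inferred from the naming of the 40B repo (7B is used here only to keep memory requirements manageable), and the Falcon repos ship custom modeling code, so `trust_remote_code=True` is needed:

```python
# Minimal sketch: loading Falcon via Hugging Face Transformers.
# Assumption: the "tiiuae/falcon-7b" repo id, inferred from the 40B link above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-7b"  # falcon-40b works the same way, with far more memory

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # Falcon's modeling code lives in the model repo itself
    device_map="auto",       # requires the `accelerate` package
)

inputs = tokenizer("Falcon is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0]))
```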
It would be great if llama.cpp supported it as well. Note that it uses some features not found in LLaMA (FlashAttention, multi-query attention); a sketch of the latter follows below.
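For context, multi-query attention is probably the main architectural difference a llama.cpp port would have to handle: all query heads share a single key/value head, which shrinks the KV cache considerably at inference time. Below is a minimal PyTorch sketch of the idea; the names and shapes are illustrative and not taken from Falcon's actual implementation.

```python
# Minimal sketch of multi-query attention (MQA). In standard multi-head
# attention every head has its own K and V projections; in MQA all query
# heads share one K head and one V head.
import torch
import torch.nn.functional as F

def multi_query_attention(x, wq, wk, wv, n_heads):
    # x:  (batch, seq, d_model)
    # wq: (d_model, d_model)   -> n_heads query heads
    # wk: (d_model, head_dim)  -> a single shared key head
    # wv: (d_model, head_dim)  -> a single shared value head
    b, s, d = x.shape
    head_dim = d // n_heads

    q = (x @ wq).view(b, s, n_heads, head_dim).transpose(1, 2)  # (b, h, s, hd)
    k = (x @ wk).unsqueeze(1)                                   # (b, 1, s, hd), shared
    v = (x @ wv).unsqueeze(1)                                   # (b, 1, s, hd), shared

    att = (q @ k.transpose(-2, -1)) / head_dim**0.5  # broadcasts to (b, h, s, s)
    att = F.softmax(att, dim=-1)
    out = att @ v                                    # (b, h, s, hd)
    return out.transpose(1, 2).reshape(b, s, d)

# Example: batch 2, seq 16, d_model 64, 8 query heads sharing 1 KV head.
x = torch.randn(2, 16, 64)
wq, wk, wv = torch.randn(64, 64), torch.randn(64, 8), torch.randn(64, 8)
print(multi_query_attention(x, wq, wk, wv, n_heads=8).shape)  # torch.Size([2, 16, 64])
```

Because only one K/V head is stored per token, the KV cache is roughly `n_heads` times smaller than with standard multi-head attention, which matters for llama.cpp's memory-constrained inference.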
duplicate of #1602
"Falcon" is a new Large Language Model which seems to be better than Llama.
See https://falconllm.tii.ae/ and
https://iamgeekydude.com/2023/05/28/falcon-llm-the-40-billion-parameters-llm/ and
https://www.marktechpost.com/2023/05/28/technology-innovation-institute-open-sourced-falcon-llms-a-new-ai-model-that-uses-only-75-percent-of-gpt-3s-training-compute-40-percent-of-chinchillas-and-80-percent-of-palm-62b/
Actually, it is the best open-source model currently available according to the authors.
Model (for Huggingface Transformers library) with 40B and 7B parameters is available at :
https://huggingface.co/tiiuae/falcon-40b
Would be great if it would be supported also in llama.cpp.
Note it uses some novel layers (FlashAttention, Multiquery).
The text was updated successfully, but these errors were encountered: