
Support falcon models #775

Closed
bitsnaps opened this issue May 31, 2023 · 2 comments
Labels
backend gpt4all-backend issues enhancement New feature or request

Comments

@bitsnaps

Feature request

I'm not sure if it's already supported, but since GPT4All is mentioned on the Falcon-7B-Instruct model card, I don't think it would be hard to implement, right?

Motivation

Better performance and accuracy, since Falcon outperforms most other models on NLP benchmarks.

Your contribution

I'm not sure if I can, nor whether I have the permissions!

@mvenditto
Contributor

mvenditto commented May 31, 2023

I may be wrong, but it probably depends on if/when the model is supported in llama.cpp (see ggerganov/llama.cpp#1602).

Update:
I was indeed wrong :)

@AndriyMulyar
Contributor

We don't need llama.cpp to support falcon for us to support it. Notice we support MPT and GPTJ :)
It's being worked on.

This was referenced Jun 1, 2023
@niansa niansa added enhancement New feature or request backend gpt4all-backend issues labels Jun 8, 2023
@jacoobes jacoobes closed this as completed Sep 9, 2023
5 participants