
Change fine-tune to use llama-cpp, after getting llama-cpp to use the GPU for fine tuning #15

Open
earonesty opened this issue Oct 20, 2023 · 0 comments
Labels
bounty Will pay for someone to get this done

Comments


earonesty commented Oct 20, 2023

this is a big task, but it's important.

the pytorch ecosystem is "dependency hell" at best, and rarely works well on platforms other than linux, especially for tasks with many deps like peft and bitsandbytes

the llama-cpp ecosystem uses CMake and is easy to get working on linux, windows, mac, and even WASM!
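As a rough illustration of why the CMake path is simpler, here is a minimal sketch of building llama.cpp with GPU support. This is an assumption about the intended workflow, not part of the ticket; the exact GPU flag name (`LLAMA_CUBLAS` here) has changed across llama.cpp versions, so check the repo's README for your checkout.

```shell
# clone llama.cpp and build with CUDA support enabled
# (flag names vary by version; -DLLAMA_CUBLAS=ON was the CUDA switch circa late 2023)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DLLAMA_CUBLAS=ON
cmake --build build --config Release
```

The same two `cmake` invocations work on linux, windows, and mac (with Metal instead of CUDA on mac), which is the portability argument above.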

we're using pytorch for fine tuning only because llama-cpp doesn't support GPU-accelerated fine tuning

the same will apply to stable-diffusion too!

this ticket is for getting llama-cpp to use the GPU for fine tuning, then switching our fine-tune code over to llama-cpp

@earonesty earonesty added the bounty Will pay for someone to get this done label Oct 20, 2023