[WIP] feat: build llama cpp externally #5790
base: master
Conversation
✅ Deploy Preview for localai ready!
Force-pushed from 4980a37 to 608264c
Force-pushed from d1569f2 to f3b1c38
So a completely separate Dockerfile and Makefile? This will be a major improvement!
Yup! My plan is to isolate everything, one backend at a time. The llama.cpp backend is currently the heaviest, and it also has a lot of backend-specific code on the Golang side - ideally I want to move all of the llama.cpp-specific code and the binary-bundling bits out of the main codebase. This is how I'm testing things now with #5816 in (a rough sketch below):
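To make the idea concrete, here is a minimal sketch of what a standalone backend Dockerfile could look like. This is not the PR's actual layout: the stage names, source paths (`backend/cpp/llama`), and the `grpc-server` build target are all assumptions for illustration.

```dockerfile
# Hypothetical standalone image for the llama.cpp backend,
# built entirely outside the main LocalAI Dockerfile.
FROM debian:bookworm AS builder
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential cmake git ca-certificates
WORKDIR /src
# Illustrative path; the real source layout may differ.
COPY backend/cpp/llama ./backend/cpp/llama
RUN cmake -B build backend/cpp/llama && \
    cmake --build build --config Release -j"$(nproc)"

FROM debian:bookworm-slim
# Ship only the backend's gRPC server binary; the main LocalAI
# image no longer needs any of the llama.cpp build machinery.
COPY --from=builder /src/build/grpc-server /usr/local/bin/llama-cpp-grpc
ENTRYPOINT ["/usr/local/bin/llama-cpp-grpc"]
```

The main process could then attach to it as an external gRPC backend (e.g. via LocalAI's `--external-grpc-backends` flag or the `EXTERNAL_GRPC_BACKENDS` environment variable), which is what makes dropping the bundled binary from the core image feasible.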
Force-pushed from f3b1c38 to 5885711
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Description
This PR fixes #
Notes for Reviewers
Signed commits