Bump llama.cpp to b6002 #786
Conversation
Signed-off-by: Dennis Keck <26092524+fellhorn@users.noreply.github.com>
A prerequisite for multimodal support; a couple of bug fixes have been added in the meantime.
Thanks for the PR. https://github.com/utilityai/llama-cpp-rs/actions/runs/16556434445/job/46873866455?pr=786 should be passing before merge.
Merging main may fix it.
There is a chicken-and-egg issue in CI: how was that solved before? I also noticed that …
The other failure seems to be a temporary (?) issue when pulling …
Yeah, I don't think we actually need to test publishing llama-cpp-2 (llama-cpp-sys-2 is always the problem one). That has yet to be solved, and I'm currently ignoring it until someone is more bothered than I am. It also seems the CUDA images are now behind some auth, which is unfortunate; I'll likely remove those checks, so don't worry about them. However, the tests need to pass: https://github.com/utilityai/llama-cpp-rs/actions/runs/16556434445/job/46873866455?pr=786
Cargo's multi-package publishing feature would be needed here. Unfortunately it is still only available on nightly, see rust-lang/cargo#15636.
But the …
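For reference, a rough sketch of what such a multi-package publish might look like once the nightly feature is usable; the exact flag and command form (`-Zpackage-workspace`, `--workspace`) are assumptions about the unstable feature, not something confirmed in this thread:

```sh
# Hypothetical invocation of Cargo's unstable multi-package publishing,
# which would publish llama-cpp-sys-2 and llama-cpp-2 in dependency order.
# Requires a nightly toolchain; flag names are assumptions.
cargo +nightly publish --workspace -Zpackage-workspace --dry-run
```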
There are other release tools out there to work around the multi-package publishing issue, or we just wait for …
Bump llama.cpp to b6002.
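For readers following along, a minimal sketch of how such a bump is typically done when llama.cpp is vendored as a git submodule; the submodule path below is an assumption, not taken from this repository's layout:

```sh
# Sketch of bumping a vendored llama.cpp submodule to the b6002 tag.
# The submodule path is an assumption; adjust it to the actual layout.
cd llama-cpp-sys-2/llama.cpp
git fetch origin tag b6002
git checkout b6002
cd ../..
git add llama-cpp-sys-2/llama.cpp
git commit -s -m "Bump llama.cpp to b6002"
```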