Description
LocalAI version:
latest, 2.29.0
Environment, CPU architecture, OS, and Version:
Apple silicon M1 Pro and M1 Max
Describe the bug
When trying to use the binary from the release, I hit some undocumented requirements, which I have identified below.
To Reproduce
On a clean Apple Silicon (M1) machine, run the release binary without any other changes.
Expected behavior
No errors, or a list of requirements for running the binary on Apple Silicon added to the documentation.
Logs
Can't attach the logs, but they are similar to #5465
Additional context
Requirements identified:
- Due to library versions, it only works on the latest macOS version, Sequoia. If you try to run it on Ventura with debug enabled, you get a message about a library that was compiled for a version that only works on Sequoia.
- The protobuf libraries are a requirement (install them with brew). Without them, llama-cpp will not start, and with debug enabled you can see that this library is missing.
- I didn't get this error myself, but per "gpuFillInfo not implemented on darwin" #5465, it also needs libutf8.
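For anyone else hitting this, here is a minimal sketch of the workaround and diagnosis steps above. The `protobuf` Homebrew formula is the standard one; the binary name `local-ai` and its path are assumptions, so adjust to where you extracted the release.

```shell
# Install the protobuf runtime libraries that llama-cpp needs to start.
# (Recent protobuf formulas also bundle the utf8_range / libutf8 pieces.)
brew install protobuf

# Check which macOS version the binary was built for; this explains the
# failure on Ventura. Look at the "minos" field under LC_BUILD_VERSION.
otool -l ./local-ai | grep -A4 LC_BUILD_VERSION

# List the dynamic-library dependencies to spot missing ones,
# e.g. protobuf or libutf8.
otool -L ./local-ai | grep -iE 'protobuf|utf8'
```

These are macOS-only commands (`otool` ships with the Xcode command-line tools), so they won't work on other platforms.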
This is probably a duplicate of #3858