-
I am facing a similar problem when building llama.cpp for CPU on the Jetson Orin Nano. llama.cpp cannot be built for the Cortex-A78AE CPU.
-
I don't have this platform; however, it would be much more helpful if you guys would post the exact commands you used to try to compile and run llama.cpp on your respective platform(s). 🤔 Did you follow the instructions from https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md#cuda
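For reference, the CUDA build from that page is roughly the sketch below on a recent checkout; flag names have changed over time (older trees used `-DLLAMA_CUBLAS=ON`), so check the doc that matches your revision:

```sh
# Configure with the CUDA backend enabled, then build in Release mode.
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release
```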
-
My Orin Nano (8 GB) is flashed with JetPack 6.0 (CUDA 12.2, GCC 11.4). When I compile the llama.cpp source (with CUDA) on the Orin Nano, the following error occurs. Has anyone compiled it successfully on the Nano? Does anyone know how to resolve this error?
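Roughly, the build I am attempting looks like the sketch below (not my exact command history; the `CMAKE_CUDA_ARCHITECTURES=87` pin is my own addition to target the Orin GPU, compute capability 8.7, and is not required by the build docs):

```sh
# Sketch: CUDA build of llama.cpp on a Jetson Orin Nano (JetPack 6.0).
# CMAKE_CUDA_ARCHITECTURES=87 restricts compilation to the Orin's GPU architecture;
# drop or adjust it if the error turns out to be unrelated to architecture selection.
cmake -B build -DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES=87
cmake --build build --config Release -j 4
```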