Update llama.cpp integration #11864

Merged · 6 commits · Nov 1, 2023

Changes from 4 commits
12 changes: 8 additions & 4 deletions docs/docs/integrations/llms/llamacpp.ipynb
@@ -6,9 +6,9 @@
"source": [
"# Llama.cpp\n",
"\n",
"[llama-cpp-python](https://github.com/abetlen/llama-cpp-python) is a Python binding for [llama.cpp](https://github.com/ggerganov/llama.cpp). \n",
"[llama-cpp-python](https://github.com/abetlen/llama-cpp-python) is a Python binding for [llama.cpp](https://github.com/ggerganov/llama.cpp).\n",
"\n",
"It supports inference for [many LLMs](https://github.com/ggerganov/llama.cpp), which can be accessed on [HuggingFace](https://huggingface.co/TheBloke).\n",
"It supports inference for Meta's [LLaMA](https://github.com/facebookresearch/llama/) models, which can be accessed on [HuggingFace](https://huggingface.co/TheBloke).\n",
"\n",
"This notebook goes over how to run `llama-cpp-python` within LangChain.\n",
"\n",
@@ -54,7 +54,7 @@
"source": [
"### Installation with OpenBLAS / cuBLAS / CLBlast\n",
"\n",
"`lama.cpp` supports multiple BLAS backends for faster processing. Use the `FORCE_CMAKE=1` environment variable to force the use of cmake and install the pip package for the desired BLAS backend ([source](https://github.com/abetlen/llama-cpp-python#installation-with-openblas--cublas--clblast)).\n",
"`llama.cpp` supports multiple BLAS backends for faster processing. Use the `FORCE_CMAKE=1` environment variable to force the use of cmake and install the pip package for the desired BLAS backend ([source](https://github.com/abetlen/llama-cpp-python#installation-with-openblas--cublas--clblast)).\n",
"\n",
"Example installation with cuBLAS backend:"
]
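
The install cell itself is elided from this hunk. Per the llama-cpp-python README that the cell cites, the cuBLAS build looks roughly like the following notebook cell (a sketch; exact CMake flags can change between llama-cpp-python releases):

```python
# Notebook cell: build llama-cpp-python against cuBLAS
# (assumes the CUDA toolkit is already installed on the machine)
!CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python
```
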
@@ -177,7 +177,11 @@
"\n",
"You don't need an `API_TOKEN` as you will run the LLM locally.\n",
"\n",
"It is worth understanding which models are suitable to be used on the desired machine."
"It is worth understanding which models are suitable to be used on the desired machine.\n",
"\n",
"[TheBloke's](https://huggingface.co/TheBloke) Hugging Face models have a `Provided files` section that exposes the RAM required to run models of different quantisation sizes and methods (eg: [Llama2-7B-Chat-GGUF](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF#provided-files)).\n",
"\n",
"This [github issue](https://github.com/facebookresearch/llama/issues/425) is also relevant to find the right model for your machine."
]
},
{
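
Taken together, the guidance this hunk adds reduces to: pick a quantisation whose RAM requirement your machine can hold, then point LangChain's `LlamaCpp` wrapper at the local file. A minimal sketch (the path and parameter values are illustrative, not prescribed by this PR):

```python
from langchain.llms import LlamaCpp

# Illustrative path: a GGUF file sized per the model's "Provided files" RAM table.
llm = LlamaCpp(
    model_path="./llama-2-7b-chat.Q4_K_M.gguf",
    n_ctx=2048,       # context window
    n_gpu_layers=0,   # raise if llama-cpp-python was built with GPU offload (e.g. cuBLAS)
    temperature=0.75,
)

print(llm("Q: What are the planets of the solar system? A:"))
```

Keeping `n_gpu_layers=0` runs fully on CPU; with a cuBLAS build, offloading layers to the GPU trades VRAM for speed, which is exactly why the RAM/quantisation table the diff links to matters when choosing a file.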