forked from ggerganov/llama.cpp
Commit
This commit does not belong to any branch on this repository, and may belong to a fork outside of the repository.
Merge branch 'master' into develop/personal
* master: (773 commits)
  server : add `/detokenize` endpoint (ggerganov#2802)
  convert.py : advanced option (ggerganov#2753)
  llama : use Unicode Escape Sequence to replace encoded characters (ggerganov#2814)
  flake.nix : add rocm support and cleanup (ggerganov#2808)
  llama : move #includes out of _GNU_SOURCE conditional (ggerganov#2817)
  main : fix bug (penalize_nl=false doesn't work) + suppress warning on mingw (ggerganov#1528)
  llama : use std::abs in llama_sample_tail_free (ggerganov#2800)
  k-quants : remove unnecessary tensor shape restrictions (ggerganov#2811)
  Better perplexity for 2- and 3-bit quantization for LLaMA-v2-70B (ggerganov#2807)
  Fix HellaSwag (ggerganov#2805)
  flake : build llama.cpp on Intel with nix (ggerganov#2795)
  Handle null rope scaling value (ggerganov#2793)
  Fix spm whitespaces (ggerganov#2806)
  examples : skip unnecessary external lib in server README.md how-to (ggerganov#2804)
  llama : fix struct decl (ggerganov#2790)
  Faster perplexity computation (ggerganov#2786)
  llama : add llama_beam_search() (ggerganov#2267)
  convert.py : Get rope scale from HuggingFace models (ggerganov#2772)
  llama-bench : add model sizes (ggerganov#2771)
  convert.py : export rope freq_base when converting CodeLlama from an HF model (ggerganov#2773)
  ...
Showing 210 changed files with 112,015 additions and 12,343 deletions.
@@ -0,0 +1,18 @@
---
Checks: >
    bugprone-*,
    -bugprone-easily-swappable-parameters,
    -bugprone-implicit-widening-of-multiplication-result,
    -bugprone-narrowing-conversions,
    readability-*,
    -readability-avoid-unconditional-preprocessor-if,
    -readability-function-cognitive-complexity,
    -readability-identifier-length,
    -readability-implicit-bool-conversion,
    -readability-magic-numbers,
    -readability-uppercase-literal-suffix,
    clang-analyzer-*,
    -clang-analyzer-security.insecureAPI.DeprecatedOrUnsafeBufferHandling,
    performance-*,
    portability-*,
FormatStyle: none
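In the `Checks` value above, a bare glob such as `bugprone-*` enables a whole check family and a leading `-` subtracts individual checks from it. A minimal sketch of that semantics, using a trimmed copy of the config (the `clang-tidy` invocation at the end is illustrative only and assumes the tool is installed):

```shell
# Write a trimmed copy of the config above to a scratch directory.
dir=$(mktemp -d)
cat > "$dir/.clang-tidy" <<'EOF'
---
Checks: >
    bugprone-*,
    -bugprone-easily-swappable-parameters,
    readability-*,
    -readability-magic-numbers,
    performance-*,
    portability-*,
FormatStyle: none
EOF
# Count the subtracted checks (lines whose first non-space char is '-'
# followed by a check name):
grep -c '^ *-[a-z]' "$dir/.clang-tidy"   # -> 2
# clang-tidy --list-checks   # would show the effective set (not run here)
```

clang-tidy picks up the nearest `.clang-tidy` file automatically when run from inside the source tree, so adding this file enables the checks for all contributors without extra flags.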
@@ -0,0 +1,33 @@
ARG UBUNTU_VERSION=22.04

# This needs to generally match the container host's environment.
ARG CUDA_VERSION=11.7.1

# Target the CUDA build image
ARG BASE_CUDA_DEV_CONTAINER=nvidia/cuda:${CUDA_VERSION}-devel-ubuntu${UBUNTU_VERSION}

FROM ${BASE_CUDA_DEV_CONTAINER} as build

# Unless otherwise specified, we make a fat build.
ARG CUDA_DOCKER_ARCH=all

RUN apt-get update && \
    apt-get install -y build-essential python3 python3-pip

COPY requirements.txt requirements.txt

RUN pip install --upgrade pip setuptools wheel \
    && pip install -r requirements.txt

WORKDIR /app

COPY . .

# Set nvcc architecture
ENV CUDA_DOCKER_ARCH=${CUDA_DOCKER_ARCH}
# Enable cuBLAS
ENV LLAMA_CUBLAS=1

RUN make

ENTRYPOINT ["/app/.devops/tools.sh"]
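This image's entrypoint is the repository's `tools.sh` wrapper. A typical build-and-run sequence might look like the following; the image tag, Dockerfile path, and model path are assumptions for illustration, and running requires Docker with the NVIDIA container toolkit:

```shell
# Illustrative only (not run here): tag, file path, and model path are assumed.
#   docker build -t local/llama.cpp:full-cuda -f .devops/full-cuda.Dockerfile .
#   docker run --gpus all -v /path/to/models:/models \
#       local/llama.cpp:full-cuda --run -m /models/model.bin -p "Hello" -n 64
# CUDA_DOCKER_ARCH=all produces a fat binary; a build arg can narrow it:
cuda_arch="all"   # e.g. --build-arg CUDA_DOCKER_ARCH=compute_61 for a specific GPU
echo "CUDA_DOCKER_ARCH=${cuda_arch}"
```

Narrowing the architecture shortens compile time and shrinks the binary at the cost of portability across GPU generations.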
@@ -0,0 +1,44 @@
ARG UBUNTU_VERSION=22.04

# This needs to generally match the container host's environment.
ARG ROCM_VERSION=5.6

# Target the ROCm build image
ARG BASE_ROCM_DEV_CONTAINER=rocm/dev-ubuntu-${UBUNTU_VERSION}:${ROCM_VERSION}-complete

FROM ${BASE_ROCM_DEV_CONTAINER} as build

# Unless otherwise specified, we make a fat build.
# List from https://github.com/ggerganov/llama.cpp/pull/1087#issuecomment-1682807878
# This is mostly tied to rocBLAS supported archs.
ARG ROCM_DOCKER_ARCH=\
    gfx803 \
    gfx900 \
    gfx906 \
    gfx908 \
    gfx90a \
    gfx1010 \
    gfx1030 \
    gfx1100 \
    gfx1101 \
    gfx1102

COPY requirements.txt requirements.txt

RUN pip install --upgrade pip setuptools wheel \
    && pip install -r requirements.txt

WORKDIR /app

COPY . .

# Set the GPU architecture targets
ENV GPU_TARGETS=${ROCM_DOCKER_ARCH}
# Enable ROCm
ENV LLAMA_HIPBLAS=1
ENV CC=/opt/rocm/llvm/bin/clang
ENV CXX=/opt/rocm/llvm/bin/clang++

RUN make

ENTRYPOINT ["/app/.devops/tools.sh"]
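The `ROCM_DOCKER_ARCH` ARG above collapses its backslash continuations into a single space-separated string, which the build then exposes as `GPU_TARGETS`. A sketch of what the build system does with it; the per-arch flag name is assumed from common hipcc usage, not taken from this diff:

```shell
# The ARG's backslash continuations collapse into one space-separated value:
rocm_arch="gfx803 gfx900 gfx906 gfx908 gfx90a gfx1010 gfx1030 gfx1100 gfx1101 gfx1102"
echo "$rocm_arch" | wc -w   # -> 10
# What a per-arch expansion would look like (flag name assumed, not run):
for a in $rocm_arch; do echo "--offload-arch=$a"; done | head -n 2
```

Trimming this list to only the GPU you have is the usual way to cut the build time of a ROCm image.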
@@ -0,0 +1,58 @@
# SRPM for building from source and packaging an RPM for RPM-based distros.
# https://fedoraproject.org/wiki/How_to_create_an_RPM_package
# Built and maintained by John Boero - boeroboy@gmail.com
# In honor of Seth Vidal https://www.redhat.com/it/blog/thank-you-seth-vidal

# Notes for llama.cpp:
# 1. Tags are currently based on hash - which will not sort asciibetically.
#    We need to declare standard versioning if people want to sort latest releases.
# 2. Builds for CUDA/OpenCL support are separate, with different dependencies.
# 3. NVIDIA's developer repo must be enabled with nvcc, cublas, clblas, etc. installed.
#    Example: https://developer.download.nvidia.com/compute/cuda/repos/fedora37/x86_64/cuda-fedora37.repo
# 4. OpenCL/CLBlast support simply requires the ICD loader and basic OpenCL libraries.
#    It is up to the user to install the correct vendor-specific support.

Name:           llama.cpp-clblast
Version:        master
Release:        1%{?dist}
Summary:        OpenCL Inference of LLaMA model in pure C/C++
License:        MIT
Source0:        https://github.com/ggerganov/llama.cpp/archive/refs/heads/master.tar.gz
BuildRequires:  coreutils make gcc-c++ git mesa-libOpenCL-devel
URL:            https://github.com/ggerganov/llama.cpp

%define debug_package %{nil}
%define source_date_epoch_from_changelog 0

%description
OpenCL-accelerated inference for Meta's LLaMA 2 models using default options.

%prep
%setup -n llama.cpp-master

%build
make -j LLAMA_CLBLAST=1

%install
mkdir -p %{buildroot}%{_bindir}/
cp -p main %{buildroot}%{_bindir}/llamacppclblast
cp -p server %{buildroot}%{_bindir}/llamacppclblastserver
cp -p simple %{buildroot}%{_bindir}/llamacppclblastsimple

%clean
rm -rf %{buildroot}
rm -rf %{_builddir}/*

%files
%{_bindir}/llamacppclblast
%{_bindir}/llamacppclblastserver
%{_bindir}/llamacppclblastsimple

%pre

%post

%preun
%postun

%changelog
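The spec files in this commit differ mainly in their `%build` flags and in the binary names their `%install` sections produce, so the CPU, CLBlast, and cuBLAS packages can be installed side by side. A sketch of the naming scheme; the `rpmbuild` invocation in the comment is illustrative and assumes the rpm-build tooling and spec file name:

```shell
# Illustrative only (not run here):
#   rpmbuild -ba llama.cpp-clblast.spec
# Each variant's %install renames the three tools so the packages don't clash:
for variant in "" clblast cublas; do
  for tool in "" server simple; do
    echo "llamacpp${variant}${tool}"
  done
done
```

This prints the nine installed names, from `llamacpp` through `llamacppcublassimple`, matching the `%files` lists in the three specs.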
@@ -0,0 +1,59 @@
# SRPM for building from source and packaging an RPM for RPM-based distros.
# https://fedoraproject.org/wiki/How_to_create_an_RPM_package
# Built and maintained by John Boero - boeroboy@gmail.com
# In honor of Seth Vidal https://www.redhat.com/it/blog/thank-you-seth-vidal

# Notes for llama.cpp:
# 1. Tags are currently based on hash - which will not sort asciibetically.
#    We need to declare standard versioning if people want to sort latest releases.
# 2. Builds for CUDA/OpenCL support are separate, with different dependencies.
# 3. NVIDIA's developer repo must be enabled with nvcc, cublas, clblas, etc. installed.
#    Example: https://developer.download.nvidia.com/compute/cuda/repos/fedora37/x86_64/cuda-fedora37.repo
# 4. OpenCL/CLBlast support simply requires the ICD loader and basic OpenCL libraries.
#    It is up to the user to install the correct vendor-specific support.

Name:           llama.cpp-cublas
Version:        master
Release:        1%{?dist}
Summary:        CUDA Inference of LLaMA model in pure C/C++ (with cuBLAS)
License:        MIT
Source0:        https://github.com/ggerganov/llama.cpp/archive/refs/heads/master.tar.gz
BuildRequires:  coreutils make gcc-c++ git cuda-toolkit
Requires:       cuda-toolkit
URL:            https://github.com/ggerganov/llama.cpp

%define debug_package %{nil}
%define source_date_epoch_from_changelog 0

%description
CUDA-accelerated inference for Meta's LLaMA 2 models using default options.

%prep
%setup -n llama.cpp-master

%build
make -j LLAMA_CUBLAS=1

%install
mkdir -p %{buildroot}%{_bindir}/
cp -p main %{buildroot}%{_bindir}/llamacppcublas
cp -p server %{buildroot}%{_bindir}/llamacppcublasserver
cp -p simple %{buildroot}%{_bindir}/llamacppcublassimple

%clean
rm -rf %{buildroot}
rm -rf %{_builddir}/*

%files
%{_bindir}/llamacppcublas
%{_bindir}/llamacppcublasserver
%{_bindir}/llamacppcublassimple

%pre

%post

%preun
%postun

%changelog
@@ -0,0 +1,58 @@
# SRPM for building from source and packaging an RPM for RPM-based distros.
# https://fedoraproject.org/wiki/How_to_create_an_RPM_package
# Built and maintained by John Boero - boeroboy@gmail.com
# In honor of Seth Vidal https://www.redhat.com/it/blog/thank-you-seth-vidal

# Notes for llama.cpp:
# 1. Tags are currently based on hash - which will not sort asciibetically.
#    We need to declare standard versioning if people want to sort latest releases.
# 2. Builds for CUDA/OpenCL support are separate, with different dependencies.
# 3. NVIDIA's developer repo must be enabled with nvcc, cublas, clblas, etc. installed.
#    Example: https://developer.download.nvidia.com/compute/cuda/repos/fedora37/x86_64/cuda-fedora37.repo
# 4. OpenCL/CLBlast support simply requires the ICD loader and basic OpenCL libraries.
#    It is up to the user to install the correct vendor-specific support.

Name:           llama.cpp
Version:        master
Release:        1%{?dist}
Summary:        CPU Inference of LLaMA model in pure C/C++ (no CUDA/OpenCL)
License:        MIT
Source0:        https://github.com/ggerganov/llama.cpp/archive/refs/heads/master.tar.gz
BuildRequires:  coreutils make gcc-c++ git
URL:            https://github.com/ggerganov/llama.cpp

%define debug_package %{nil}
%define source_date_epoch_from_changelog 0

%description
CPU inference for Meta's LLaMA 2 models using default options.

%prep
%autosetup

%build
make -j

%install
mkdir -p %{buildroot}%{_bindir}/
cp -p main %{buildroot}%{_bindir}/llamacpp
cp -p server %{buildroot}%{_bindir}/llamacppserver
cp -p simple %{buildroot}%{_bindir}/llamacppsimple

%clean
rm -rf %{buildroot}
rm -rf %{_builddir}/*

%files
%{_bindir}/llamacpp
%{_bindir}/llamacppserver
%{_bindir}/llamacppsimple

%pre

%post

%preun
%postun

%changelog
@@ -0,0 +1,32 @@
ARG UBUNTU_VERSION=22.04
# This needs to generally match the container host's environment.
ARG CUDA_VERSION=11.7.1
# Target the CUDA build image
ARG BASE_CUDA_DEV_CONTAINER=nvidia/cuda:${CUDA_VERSION}-devel-ubuntu${UBUNTU_VERSION}
# Target the CUDA runtime image
ARG BASE_CUDA_RUN_CONTAINER=nvidia/cuda:${CUDA_VERSION}-runtime-ubuntu${UBUNTU_VERSION}

FROM ${BASE_CUDA_DEV_CONTAINER} as build

# Unless otherwise specified, we make a fat build.
ARG CUDA_DOCKER_ARCH=all

RUN apt-get update && \
    apt-get install -y build-essential

WORKDIR /app

COPY . .

# Set nvcc architecture
ENV CUDA_DOCKER_ARCH=${CUDA_DOCKER_ARCH}
# Enable cuBLAS
ENV LLAMA_CUBLAS=1

RUN make

FROM ${BASE_CUDA_RUN_CONTAINER} as runtime

COPY --from=build /app/main /main

ENTRYPOINT [ "/main" ]
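Unlike the "full" image, this Dockerfile is a two-stage build: compilation happens in the `-devel` base image, and only the `main` binary is copied into the much smaller `-runtime` image. Substituting the default ARGs reproduces the two image references the stages resolve to:

```shell
# Stage 1 (build) compiles /app/main; stage 2 (runtime) receives /main only.
CUDA_VERSION=11.7.1
UBUNTU_VERSION=22.04
echo "nvidia/cuda:${CUDA_VERSION}-devel-ubuntu${UBUNTU_VERSION}"
echo "nvidia/cuda:${CUDA_VERSION}-runtime-ubuntu${UBUNTU_VERSION}"
```

Because build tools never enter the final stage, the shipped image carries only the CUDA runtime libraries plus the single binary.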
@@ -0,0 +1,44 @@
ARG UBUNTU_VERSION=22.04

# This needs to generally match the container host's environment.
ARG ROCM_VERSION=5.6

# Target the ROCm build image
ARG BASE_ROCM_DEV_CONTAINER=rocm/dev-ubuntu-${UBUNTU_VERSION}:${ROCM_VERSION}-complete

FROM ${BASE_ROCM_DEV_CONTAINER} as build

# Unless otherwise specified, we make a fat build.
# List from https://github.com/ggerganov/llama.cpp/pull/1087#issuecomment-1682807878
# This is mostly tied to rocBLAS supported archs.
ARG ROCM_DOCKER_ARCH=\
    gfx803 \
    gfx900 \
    gfx906 \
    gfx908 \
    gfx90a \
    gfx1010 \
    gfx1030 \
    gfx1100 \
    gfx1101 \
    gfx1102

COPY requirements.txt requirements.txt

RUN pip install --upgrade pip setuptools wheel \
    && pip install -r requirements.txt

WORKDIR /app

COPY . .

# Set the GPU architecture targets
ENV GPU_TARGETS=${ROCM_DOCKER_ARCH}
# Enable ROCm
ENV LLAMA_HIPBLAS=1
ENV CC=/opt/rocm/llvm/bin/clang
ENV CXX=/opt/rocm/llvm/bin/clang++

RUN make

ENTRYPOINT [ "/app/main" ]