stdall
The virtual environment was not created successfully because ensurepip is not
available. On Debian/Ubuntu systems, you need to install the python3-venv
package using the following command.
apt install python3.10-venv
You may need to use sudo with that command. After installing the python3-venv
package, recreate your virtual environment.
Failing command: /mnt/llama.cpp/venv/bin/python3
ci/run.sh: line 630: /mnt/llama.cpp/venv/bin/activate: No such file or directory
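A minimal sketch of the recovery the message above asks for, assuming the venv lives at /mnt/llama.cpp/venv (the path is taken from the "Failing command" line) and that Python 3.10 is the interpreter in use:

    sudo apt install python3.10-venv     # sudo may or may not be needed, per the message above
    rm -rf /mnt/llama.cpp/venv           # discard the half-created venv
    python3 -m venv /mnt/llama.cpp/venv  # recreate it now that ensurepip is available
    . /mnt/llama.cpp/venv/bin/activate   # the activate script ci/run.sh failed to source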
Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: numpy~=1.24.4 in /home/ggml/.local/lib/python3.10/site-packages (from -r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 1)) (1.24.4)
Requirement already satisfied: sentencepiece~=0.1.98 in /home/ggml/.local/lib/python3.10/site-packages (from -r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 2)) (0.1.98)
Requirement already satisfied: transformers<5.0.0,>=4.35.2 in /home/ggml/.local/lib/python3.10/site-packages (from -r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 3)) (4.37.1)
Requirement already satisfied: gguf>=0.1.0 in /home/ggml/.local/lib/python3.10/site-packages (from -r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 4)) (0.7.0)
Requirement already satisfied: protobuf<5.0.0,>=4.21.0 in /home/ggml/.local/lib/python3.10/site-packages (from -r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 5)) (4.23.4)
Requirement already satisfied: torch~=2.1.1 in /home/ggml/.local/lib/python3.10/site-packages (from -r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (2.1.2)
Requirement already satisfied: tqdm>=4.27 in /home/ggml/.local/lib/python3.10/site-packages (from transformers<5.0.0,>=4.35.2->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 3)) (4.65.0)
Requirement already satisfied: regex!=2019.12.17 in /home/ggml/.local/lib/python3.10/site-packages (from transformers<5.0.0,>=4.35.2->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 3)) (2023.6.3)
Requirement already satisfied: huggingface-hub<1.0,>=0.19.3 in /home/ggml/.local/lib/python3.10/site-packages (from transformers<5.0.0,>=4.35.2->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 3)) (0.19.4)
Requirement already satisfied: packaging>=20.0 in /home/ggml/.local/lib/python3.10/site-packages (from transformers<5.0.0,>=4.35.2->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 3)) (23.1)
Requirement already satisfied: requests in /home/ggml/.local/lib/python3.10/site-packages (from transformers<5.0.0,>=4.35.2->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 3)) (2.31.0)
Requirement already satisfied: filelock in /home/ggml/.local/lib/python3.10/site-packages (from transformers<5.0.0,>=4.35.2->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 3)) (3.12.2)
Requirement already satisfied: tokenizers<0.19,>=0.14 in /home/ggml/.local/lib/python3.10/site-packages (from transformers<5.0.0,>=4.35.2->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 3)) (0.15.1)
Requirement already satisfied: pyyaml>=5.1 in /usr/lib/python3/dist-packages (from transformers<5.0.0,>=4.35.2->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 3)) (5.4.1)
Requirement already satisfied: safetensors>=0.3.1 in /home/ggml/.local/lib/python3.10/site-packages (from transformers<5.0.0,>=4.35.2->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 3)) (0.4.1)
Requirement already satisfied: nvidia-cusolver-cu12==11.4.5.107 in /home/ggml/.local/lib/python3.10/site-packages (from torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (11.4.5.107)
Requirement already satisfied: nvidia-curand-cu12==10.3.2.106 in /home/ggml/.local/lib/python3.10/site-packages (from torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (10.3.2.106)
Requirement already satisfied: nvidia-nvtx-cu12==12.1.105 in /home/ggml/.local/lib/python3.10/site-packages (from torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (12.1.105)
Requirement already satisfied: jinja2 in /usr/lib/python3/dist-packages (from torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (3.0.3)
Requirement already satisfied: nvidia-cuda-runtime-cu12==12.1.105 in /home/ggml/.local/lib/python3.10/site-packages (from torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (12.1.105)
Requirement already satisfied: nvidia-cufft-cu12==11.0.2.54 in /home/ggml/.local/lib/python3.10/site-packages (from torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (11.0.2.54)
Requirement already satisfied: networkx in /home/ggml/.local/lib/python3.10/site-packages (from torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (3.1)
Requirement already satisfied: sympy in /home/ggml/.local/lib/python3.10/site-packages (from torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (1.12)
Requirement already satisfied: typing-extensions in /home/ggml/.local/lib/python3.10/site-packages (from torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (4.7.1)
Requirement already satisfied: nvidia-cuda-nvrtc-cu12==12.1.105 in /home/ggml/.local/lib/python3.10/site-packages (from torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (12.1.105)
Requirement already satisfied: nvidia-cusparse-cu12==12.1.0.106 in /home/ggml/.local/lib/python3.10/site-packages (from torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (12.1.0.106)
Requirement already satisfied: triton==2.1.0 in /home/ggml/.local/lib/python3.10/site-packages (from torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (2.1.0)
Requirement already satisfied: nvidia-cuda-cupti-cu12==12.1.105 in /home/ggml/.local/lib/python3.10/site-packages (from torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (12.1.105)
Requirement already satisfied: nvidia-nccl-cu12==2.18.1 in /home/ggml/.local/lib/python3.10/site-packages (from torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (2.18.1)
Requirement already satisfied: fsspec in /home/ggml/.local/lib/python3.10/site-packages (from torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (2023.6.0)
Requirement already satisfied: nvidia-cublas-cu12==12.1.3.1 in /home/ggml/.local/lib/python3.10/site-packages (from torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (12.1.3.1)
Requirement already satisfied: nvidia-cudnn-cu12==8.9.2.26 in /home/ggml/.local/lib/python3.10/site-packages (from torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (8.9.2.26)
Requirement already satisfied: nvidia-nvjitlink-cu12 in /home/ggml/.local/lib/python3.10/site-packages (from nvidia-cusolver-cu12==11.4.5.107->torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (12.3.101)
Requirement already satisfied: urllib3<3,>=1.21.1 in /home/ggml/.local/lib/python3.10/site-packages (from requests->transformers<5.0.0,>=4.35.2->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 3)) (2.1.0)
Requirement already satisfied: certifi>=2017.4.17 in /usr/lib/python3/dist-packages (from requests->transformers<5.0.0,>=4.35.2->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 3)) (2020.6.20)
Requirement already satisfied: charset-normalizer<4,>=2 in /home/ggml/.local/lib/python3.10/site-packages (from requests->transformers<5.0.0,>=4.35.2->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 3)) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in /usr/lib/python3/dist-packages (from requests->transformers<5.0.0,>=4.35.2->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 3)) (3.3)
Requirement already satisfied: mpmath>=0.19 in /home/ggml/.local/lib/python3.10/site-packages (from sympy->torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (1.3.0)
Defaulting to user installation because normal site-packages is not writeable
Obtaining file:///home/ggml/work/llama.cpp/gguf-py
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Checking if build backend supports build_editable: started
Checking if build backend supports build_editable: finished with status 'done'
Getting requirements to build editable: started
Getting requirements to build editable: finished with status 'done'
Preparing editable metadata (pyproject.toml): started
Preparing editable metadata (pyproject.toml): finished with status 'done'
Requirement already satisfied: numpy>=1.17 in /home/ggml/.local/lib/python3.10/site-packages (from gguf==0.7.0) (1.24.4)
Building wheels for collected packages: gguf
Building editable for gguf (pyproject.toml): started
Building editable for gguf (pyproject.toml): finished with status 'done'
Created wheel for gguf: filename=gguf-0.7.0-py3-none-any.whl size=3229 sha256=e77d3489e0beea138863799a05abb421092af60b67ffc67b376508ee3f0720bf
Stored in directory: /tmp/pip-ephem-wheel-cache-avw82a6j/wheels/a3/4c/52/c5934ad001d1a70ca5434f11ddc622cad9c0a484e9bf6feda3
Successfully built gguf
Installing collected packages: gguf
Attempting uninstall: gguf
Found existing installation: gguf 0.7.0
Uninstalling gguf-0.7.0:
Successfully uninstalled gguf-0.7.0
Successfully installed gguf-0.7.0
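The editable-install output above ("Obtaining file:///home/ggml/work/llama.cpp/gguf-py", "Building editable for gguf") is what pip prints for an invocation of roughly this shape; the exact flags used by ci/run.sh are an assumption:

    pip install -e /home/ggml/work/llama.cpp/gguf-py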
+ gg_run_ctest_debug
+ tee /home/ggml/results/llama.cpp/f5/3119cec4f073b6d214195ecbe1fad3abdf2b34/ggml-4-x86-cuda-v100/ctest_debug.log
+ cd /home/ggml/work/llama.cpp
+ rm -rf build-ci-debug
+ mkdir build-ci-debug
+ cd build-ci-debug
+ set -e
+ tee -a /home/ggml/results/llama.cpp/f5/3119cec4f073b6d214195ecbe1fad3abdf2b34/ggml-4-x86-cuda-v100/ctest_debug-cmake.log
+ cmake -DCMAKE_BUILD_TYPE=Debug -DLLAMA_FATAL_WARNINGS=ON -DLLAMA_CUBLAS=1 ..
-- The C compiler identification is GNU 11.4.0
-- The CXX compiler identification is GNU 11.4.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.34.1")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Found CUDAToolkit: /usr/local/cuda-12.2/include (found version "12.2.140")
-- cuBLAS found
-- The CUDA compiler identification is NVIDIA 12.2.140
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler: /usr/local/cuda-12.2/bin/nvcc - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- Using CUDA architectures: 52;61;70
-- CUDA host compiler is GNU 11.4.0
-- ccache found, compilation results will be cached. Disable with LLAMA_CCACHE=OFF.
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- x86 detected
-- Configuring done (3.0s)
-- Generating done (0.1s)
-- Build files have been written to: /home/ggml/work/llama.cpp/build-ci-debug
real 0m3.205s
user 0m2.465s
sys 0m0.747s
+ tee -a /home/ggml/results/llama.cpp/f5/3119cec4f073b6d214195ecbe1fad3abdf2b34/ggml-4-x86-cuda-v100/ctest_debug-make.log
+ make -j
[ 1%] Generating build details from Git
[ 2%] Building C object CMakeFiles/ggml.dir/ggml.c.o
[ 4%] Building C object CMakeFiles/ggml.dir/ggml-alloc.c.o
[ 4%] Building C object CMakeFiles/ggml.dir/ggml-backend.c.o
[ 5%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda.cu.o
[ 5%] Building C object CMakeFiles/ggml.dir/ggml-quants.c.o
-- Found Git: /usr/bin/git (found version "2.34.1")
[ 6%] Building CXX object common/CMakeFiles/build_info.dir/build-info.cpp.o
[ 6%] Built target build_info
nvcc fatal : Value '-use_fast_math' is not defined for option 'Werror'
make[2]: *** [CMakeFiles/ggml.dir/build.make:133: CMakeFiles/ggml.dir/ggml-cuda.cu.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:740: CMakeFiles/ggml.dir/all] Error 2
make: *** [Makefile:146: all] Error 2
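A plausible reading of the nvcc failure above: nvcc's -Werror option takes a value (a comma-separated list of warning kinds such as all-warnings), so when -DLLAMA_FATAL_WARNINGS=ON injects a bare -Werror into the CUDA flags, nvcc consumes the next flag on the command line as that value. A sketch of the two shapes, with the flag order assumed:

    nvcc -Werror all-warnings -use_fast_math ...  # accepted: all-warnings is a valid -Werror kind
    nvcc -Werror -use_fast_math ...               # fatal: '-use_fast_math' is not defined for option 'Werror'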
real 0m0.313s
user 0m0.310s
sys 0m0.104s
+ cur=2
+ echo 2
+ set +x
cat: /home/ggml/results/llama.cpp/f5/3119cec4f073b6d214195ecbe1fad3abdf2b34/ggml-4-x86-cuda-v100/ctest_debug-ctest.log: No such file or directory
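The missing ctest_debug-ctest.log is consistent with the build failure above: make exited with status 2 (captured as cur=2), so the ctest step presumably never ran and its log was never written; the cat error is a downstream symptom rather than an independent failure.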