Description
I downloaded the latest master (commit 004797f), but compiling it fails with the error below.
Console output:
root@imperial-f87d1190ac-3628b555:~/llama.cpp-master/build# cmake .. -DLLAMA_CUBLAS=ON
-- The C compiler identification is GNU 9.4.0
-- The CXX compiler identification is GNU 9.4.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
CMake Warning at CMakeLists.txt:133 (message):
Git repository not found; to enable automatic generation of build info,
make sure Git is installed and the project is a Git repository.
-- Found CUDAToolkit: /usr/local/cuda/targets/x86_64-linux/include (found version "11.3.109")
-- cuBLAS found
-- The CUDA compiler identification is NVIDIA 11.3.109
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- Using CUDA architectures: 52;61;70
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- x86 detected
-- Configuring done (2.9s)
-- Generating done (0.2s)
-- Build files have been written to: /root/llama.cpp-master/build
root@imperial-f87d1190ac-3628b555:~/llama.cpp-master/build# cmake --build . --config Release
[ 1%] Building C object CMakeFiles/ggml.dir/ggml.c.o
[ 2%] Building C object CMakeFiles/ggml.dir/ggml-alloc.c.o
[ 3%] Building C object CMakeFiles/ggml.dir/ggml-backend.c.o
[ 4%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda.cu.o
/root/llama.cpp-master/ggml-cuda.cu: In function 'void ggml_cuda_op_clamp(const ggml_tensor*, const ggml_tensor*, ggml_tensor*, const float*, const float*, float*, CUstream_st* const&)':
/root/llama.cpp-master/ggml-cuda.cu:6525:20: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
6525 | const float min = ((float *) dst->op_params)[0];
| ~^~~~~~~~~~~~~~~~~~~~~~~~~~
[ 5%] Building C object CMakeFiles/ggml.dir/k_quants.c.o
[ 5%] Built target ggml
[ 6%] Linking CUDA static library libggml_static.a
[ 6%] Built target ggml_static
[ 7%] Building CXX object CMakeFiles/llama.dir/llama.cpp.o
[ 8%] Linking CXX static library libllama.a
[ 8%] Built target llama
[ 10%] Building CXX object common/CMakeFiles/common.dir/common.cpp.o
/root/llama.cpp-master/common/common.cpp: In function 'void dump_non_result_info_yaml(FILE*, const gpt_params&, const llama_context*, const string&, const std::vector<int>&, const char*)':
/root/llama.cpp-master/common/common.cpp:1129:55: error: expected primary-expression before ')' token
1129 | fprintf(stream, "build_number: %d\n", BUILD_NUMBER);
| ^
make[2]: *** [common/CMakeFiles/common.dir/build.make:76: common/CMakeFiles/common.dir/common.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:1429: common/CMakeFiles/common.dir/all] Error 2
make: *** [Makefile:146: all] Error 2
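For what it's worth, the error message itself suggests that BUILD_NUMBER expands to nothing at that point: if the macro did not exist at all, g++ would report an undeclared identifier, whereas "expected primary-expression before ')' token" is what you get when a macro expands to an empty token list and the call collapses to fprintf(stream, "build_number: %d\n", );. My guess (not verified) is that this is connected to the "fatal: not a git repository" messages and the CMake warning above, since I built from a downloaded source archive rather than a git clone, so the generated build info header may have ended up defining BUILD_NUMBER with an empty value. A minimal standalone sketch that reproduces the same compiler error under that assumption:

// repro.cpp -- minimal sketch of the suspected cause (assumption: the generated
// build info header defines BUILD_NUMBER with no value).
#include <cstdio>

#define BUILD_NUMBER   // empty expansion, mirroring the suspected header contents

int main() {
    // After preprocessing this line becomes: fprintf(stdout, "build_number: %d\n", );
    // which g++ rejects with: error: expected primary-expression before ')' token
    fprintf(stdout, "build_number: %d\n", BUILD_NUMBER);
    return 0;
}

If that really is the cause, building from a proper git clone (or hand-editing the generated header so BUILD_NUMBER has a numeric value such as 0) would presumably avoid the error, but I have not confirmed that.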
root@imperial-f87d1190ac-3628b555:~/llama.cpp-master/build# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 56
On-line CPU(s) list: 0-55
Thread(s) per core: 2
Core(s) per socket: 14
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz
Stepping: 1
CPU MHz: 1200.362
CPU max MHz: 3300.0000
CPU min MHz: 1200.0000
BogoMIPS: 4800.45
Virtualization: VT-x
L1d cache: 896 KiB
L1i cache: 896 KiB
L2 cache: 7 MiB
L3 cache: 70 MiB
NUMA node0 CPU(s): 0-13,28-41
NUMA node1 CPU(s): 14-27,42-55
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Full generic retpoline, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts md_clear flush_l1d
root@imperial-f87d1190ac-3628b555:~/llama.cpp-master/build# nvidia-smi
Thu Oct 19 00:25:12 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 495.44 Driver Version: 495.44 CUDA Version: 11.5 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA TITAN Xp On | 00000000:85:00.0 Off | N/A |
| 23% 22C P8 8W / 250W | 0MiB / 12196MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+