Segmentation fault after model load on ROCm multi-gpu, multi-gfx #4030

Closed
4 tasks done
xangelix opened this issue Nov 11, 2023 · 23 comments
Comments

@xangelix

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • I am running the latest code. Development is very rapid so there are no tagged versions as of now.
  • I carefully followed the README.md.
  • I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • I reviewed the Discussions, and have a new bug or useful enhancement to share.

Expected Behavior

N/A (the model should load and generate text without crashing).

Current Behavior

Segmentation fault after model load for ROCm multi-GPU, multi-gfx. As best I can remember, it worked a couple of months ago, but it has now been broken for at least two weeks.
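One way to narrow a multi-GPU fault like this down is to expose a single ROCm device at a time via `HIP_VISIBLE_DEVICES` and re-run the same command. A minimal sketch (the `./main` invocation and model path in the comments are the ones from this report; the helper name is mine):

```shell
# run_on_device DEV CMD... : re-run CMD with only ROCm device DEV visible,
# so the segfault can be bisected to a specific GPU (or to the multi-GPU path).
run_on_device() {
  dev="$1"; shift
  HIP_VISIBLE_DEVICES="$dev" "$@"
}

# Usage (command as in the report):
#   run_on_device 0 ./main -ngl 99 -m mistral-7b-instruct-v0.1.Q5_K_M.gguf -p "test"
#   run_on_device 1 ./main -ngl 99 -m mistral-7b-instruct-v0.1.Q5_K_M.gguf -p "test"
```

If each device works on its own but the pair crashes, the fault is likely in the multi-GPU split path rather than in either gfx target's kernels.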

Environment and Context

  • Physical (or virtual) hardware you are using, e.g. for Linux:
rocminfo
ROCk module is loaded
=====================    
HSA System Attributes    
=====================    
Runtime Version:         1.1
System Timestamp Freq.:  1000.000000MHz
Sig. Max Wait Duration:  18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model:           LARGE                              
System Endianness:       LITTLE                             
Mwaitx:                  DISABLED
DMAbuf Support:          YES

==========               
HSA Agents               
==========               
*******                  
Agent 1                  
*******                  
  Name:                    AMD Ryzen 9 7950X 16-Core Processor
  Uuid:                    CPU-XX                             
  Marketing Name:          AMD Ryzen 9 7950X 16-Core Processor
  Vendor Name:             CPU                                
  Feature:                 None specified                     
  Profile:                 FULL_PROFILE                       
  Float Round Mode:        NEAR                               
  Max Queue Number:        0(0x0)                             
  Queue Min Size:          0(0x0)                             
  Queue Max Size:          0(0x0)                             
  Queue Type:              MULTI                              
  Node:                    0                                  
  Device Type:             CPU                                
  Cache Info:              
    L1:                      32768(0x8000) KB                   
  Chip ID:                 0(0x0)                             
  ASIC Revision:           0(0x0)                             
  Cacheline Size:          64(0x40)                           
  Max Clock Freq. (MHz):   6021                               
  BDFID:                   0                                  
  Internal Node ID:        0                                  
  Compute Unit:            32                                 
  SIMDs per CU:            0                                  
  Shader Engines:          0                                  
  Shader Arrs. per Eng.:   0                                  
  WatchPts on Addr. Ranges:1                                  
  Features:                None
  Pool Info:               
    Pool 1                   
      Segment:                 GLOBAL; FLAGS: FINE GRAINED        
      Size:                    65539100(0x3e80c1c) KB             
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
    Pool 2                   
      Segment:                 GLOBAL; FLAGS: KERNARG, FINE GRAINED
      Size:                    65539100(0x3e80c1c) KB             
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
    Pool 3                   
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
      Size:                    65539100(0x3e80c1c) KB             
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
  ISA Info:                
*******                  
Agent 2                  
*******                  
  Name:                    gfx1100                            
  Uuid:                    GPU-28b5961221d81024               
  Marketing Name:          AMD Radeon RX 7900 XTX             
  Vendor Name:             AMD                                
  Feature:                 KERNEL_DISPATCH                    
  Profile:                 BASE_PROFILE                       
  Float Round Mode:        NEAR                               
  Max Queue Number:        128(0x80)                          
  Queue Min Size:          64(0x40)                           
  Queue Max Size:          131072(0x20000)                    
  Queue Type:              MULTI                              
  Node:                    1                                  
  Device Type:             GPU                                
  Cache Info:              
    L1:                      32(0x20) KB                        
    L2:                      6144(0x1800) KB                    
    L3:                      98304(0x18000) KB                  
  Chip ID:                 29772(0x744c)                      
  ASIC Revision:           0(0x0)                             
  Cacheline Size:          64(0x40)                           
  Max Clock Freq. (MHz):   2526                               
  BDFID:                   768                                
  Internal Node ID:        1                                  
  Compute Unit:            96                                 
  SIMDs per CU:            2                                  
  Shader Engines:          6                                  
  Shader Arrs. per Eng.:   2                                  
  WatchPts on Addr. Ranges:4                                  
  Features:                KERNEL_DISPATCH 
  Fast F16 Operation:      TRUE                               
  Wavefront Size:          32(0x20)                           
  Workgroup Max Size:      1024(0x400)                        
  Workgroup Max Size per Dimension:
    x                        1024(0x400)                        
    y                        1024(0x400)                        
    z                        1024(0x400)                        
  Max Waves Per CU:        32(0x20)                           
  Max Work-item Per CU:    1024(0x400)                        
  Grid Max Size:           4294967295(0xffffffff)             
  Grid Max Size per Dimension:
    x                        4294967295(0xffffffff)             
    y                        4294967295(0xffffffff)             
    z                        4294967295(0xffffffff)             
  Max fbarriers/Workgrp:   32                                 
  Packet Processor uCode:: 528                                
  SDMA engine uCode::      19                                 
  IOMMU Support::          None                               
  Pool Info:               
    Pool 1                   
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
      Size:                    25149440(0x17fc000) KB             
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       FALSE                              
    Pool 2                   
      Segment:                 GLOBAL; FLAGS:                     
      Size:                    25149440(0x17fc000) KB             
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       FALSE                              
    Pool 3                   
      Segment:                 GROUP                              
      Size:                    64(0x40) KB                        
      Allocatable:             FALSE                              
      Alloc Granule:           0KB                                
      Alloc Alignment:         0KB                                
      Accessible by all:       FALSE                              
  ISA Info:                
    ISA 1                    
      Name:                    amdgcn-amd-amdhsa--gfx1100         
      Machine Models:          HSA_MACHINE_MODEL_LARGE            
      Profiles:                HSA_PROFILE_BASE                   
      Default Rounding Mode:   NEAR                               
      Default Rounding Mode:   NEAR                               
      Fast f16:                TRUE                               
      Workgroup Max Size:      1024(0x400)                        
      Workgroup Max Size per Dimension:
        x                        1024(0x400)                        
        y                        1024(0x400)                        
        z                        1024(0x400)                        
      Grid Max Size:           4294967295(0xffffffff)             
      Grid Max Size per Dimension:
        x                        4294967295(0xffffffff)             
        y                        4294967295(0xffffffff)             
        z                        4294967295(0xffffffff)             
      FBarrier Max Size:       32                                 
*******                  
Agent 3                  
*******                  
  Name:                    gfx1030                            
  Uuid:                    GPU-8de346d621abe448               
  Marketing Name:          AMD Radeon RX 6900 XT              
  Vendor Name:             AMD                                
  Feature:                 KERNEL_DISPATCH                    
  Profile:                 BASE_PROFILE                       
  Float Round Mode:        NEAR                               
  Max Queue Number:        128(0x80)                          
  Queue Min Size:          64(0x40)                           
  Queue Max Size:          131072(0x20000)                    
  Queue Type:              MULTI                              
  Node:                    2                                  
  Device Type:             GPU                                
  Cache Info:              
    L1:                      16(0x10) KB                        
    L2:                      4096(0x1000) KB                    
    L3:                      131072(0x20000) KB                 
  Chip ID:                 29615(0x73af)                      
  ASIC Revision:           1(0x1)                             
  Cacheline Size:          64(0x40)                           
  Max Clock Freq. (MHz):   2720                               
  BDFID:                   1792                               
  Internal Node ID:        2                                  
  Compute Unit:            80                                 
  SIMDs per CU:            2                                  
  Shader Engines:          4                                  
  Shader Arrs. per Eng.:   2                                  
  WatchPts on Addr. Ranges:4                                  
  Features:                KERNEL_DISPATCH 
  Fast F16 Operation:      TRUE                               
  Wavefront Size:          32(0x20)                           
  Workgroup Max Size:      1024(0x400)                        
  Workgroup Max Size per Dimension:
    x                        1024(0x400)                        
    y                        1024(0x400)                        
    z                        1024(0x400)                        
  Max Waves Per CU:        32(0x20)                           
  Max Work-item Per CU:    1024(0x400)                        
  Grid Max Size:           4294967295(0xffffffff)             
  Grid Max Size per Dimension:
    x                        4294967295(0xffffffff)             
    y                        4294967295(0xffffffff)             
    z                        4294967295(0xffffffff)             
  Max fbarriers/Workgrp:   32                                 
  Packet Processor uCode:: 115                                
  SDMA engine uCode::      83                                 
  IOMMU Support::          None                               
  Pool Info:               
    Pool 1                   
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
      Size:                    16760832(0xffc000) KB              
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       FALSE                              
    Pool 2                   
      Segment:                 GLOBAL; FLAGS:                     
      Size:                    16760832(0xffc000) KB              
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       FALSE                              
    Pool 3                   
      Segment:                 GROUP                              
      Size:                    64(0x40) KB                        
      Allocatable:             FALSE                              
      Alloc Granule:           0KB                                
      Alloc Alignment:         0KB                                
      Accessible by all:       FALSE                              
  ISA Info:                
    ISA 1                    
      Name:                    amdgcn-amd-amdhsa--gfx1030         
      Machine Models:          HSA_MACHINE_MODEL_LARGE            
      Profiles:                HSA_PROFILE_BASE                   
      Default Rounding Mode:   NEAR                               
      Default Rounding Mode:   NEAR                               
      Fast f16:                TRUE                               
      Workgroup Max Size:      1024(0x400)                        
      Workgroup Max Size per Dimension:
        x                        1024(0x400)                        
        y                        1024(0x400)                        
        z                        1024(0x400)                        
      Grid Max Size:           4294967295(0xffffffff)             
      Grid Max Size per Dimension:
        x                        4294967295(0xffffffff)             
        y                        4294967295(0xffffffff)             
        z                        4294967295(0xffffffff)             
      FBarrier Max Size:       32                                 
*** Done ***
lscpu
Architecture:            x86_64
  CPU op-mode(s):        32-bit, 64-bit
  Address sizes:         48 bits physical, 48 bits virtual
  Byte Order:            Little Endian
CPU(s):                  32
  On-line CPU(s) list:   0-31
Vendor ID:               AuthenticAMD
  Model name:            AMD Ryzen 9 7950X 16-Core Processor
    CPU family:          25
    Model:               97
    Thread(s) per core:  2
    Core(s) per socket:  16
    Socket(s):           1
    Stepping:            2
    CPU(s) scaling MHz:  52%
    CPU max MHz:         6021.0000
    CPU min MHz:         400.0000
    BogoMIPS:            9000.59
    Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization features: 
  Virtualization:        AMD-V
Caches (sum of all):     
  L1d:                   512 KiB (16 instances)
  L1i:                   512 KiB (16 instances)
  L2:                    16 MiB (16 instances)
  L3:                    64 MiB (2 instances)
NUMA:                    
  NUMA node(s):          1
  NUMA node0 CPU(s):     0-31
Vulnerabilities:         
  Gather data sampling:  Not affected
  Itlb multihit:         Not affected
  L1tf:                  Not affected
  Mds:                   Not affected
  Meltdown:              Not affected
  Mmio stale data:       Not affected
  Retbleed:              Not affected
  Spec rstack overflow:  Mitigation; safe RET, no microcode
  Spec store bypass:     Mitigation; Speculative Store Bypass disabled via prctl
  Spectre v1:            Mitigation; usercopy/swapgs barriers and __user pointer sanitization
  Spectre v2:            Mitigation; Enhanced / Automatic IBRS, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
  Srbds:                 Not affected
  Tsx async abort:       Not affected
  • Operating System, e.g. for Linux:
uname -a
Linux dc1a626b91a2 6.5.9-301.fsync.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Sat Oct 28 16:08:46 UTC 2023 x86_64 GNU/Linux
  • SDK version, e.g. for Linux:

ROCm 5.7.1

llamacpp 4a4fd3e

python3 --version
Python 3.11.5
make --version
GNU Make 4.4.1
Built for x86_64-pc-linux-gnu
Copyright (C) 1988-2023 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
g++ --version
g++ (GCC) 13.2.1 20230801
Copyright (C) 2023 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Failure Information (for bugs)

Provided below.

Steps to Reproduce

make LLAMA_HIPBLAS=1
I llama.cpp build info: 
I UNAME_S:   Linux
I UNAME_P:   unknown
I UNAME_M:   x86_64
I CFLAGS:    -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_HIPBLAS -DGGML_USE_CUBLAS  -std=c11   -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wdouble-promotion -pthread -march=native -mtune=native 
I CXXFLAGS:  -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_HIPBLAS -DGGML_USE_CUBLAS  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread  -Wno-array-bounds -Wno-format-truncation -Wextra-semi -march=native -mtune=native 
I NVCCFLAGS:  -I. -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE -DNDEBUG -DGGML_USE_HIPBLAS -DGGML_USE_CUBLAS  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread    -Wno-pedantic -Xcompiler "-Wno-array-bounds -Wno-format-truncation -Wextra-semi -march=native -mtune=native "
I LDFLAGS:   -L/opt/rocm/lib -Wl,-rpath=/opt/rocm/lib -lhipblas -lamdhip64 -lrocblas 
I CC:        cc (GCC) 13.2.1 20230801
I CXX:       g++ (GCC) 13.2.1 20230801


(Removed build log, no errors)
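On a mixed gfx1100 + gfx1030 box the HIP code objects have to be compiled for both ISAs; a binary built only for one target's architecture may fault when work is scheduled on the other. A sketch of a two-target build line — `GPU_TARGETS` is assumed to be the Makefile variable for HIP offload architectures (`AMDGPU_TARGETS` is the CMake spelling); verify both names against the checkout in use:

```shell
# Compose a build command that targets both installed architectures.
# GPU_TARGETS / AMDGPU_TARGETS are assumptions to check against the
# Makefile / CMakeLists of the llama.cpp revision being built.
targets="gfx1100 gfx1030"
build_cmd="make LLAMA_HIPBLAS=1 GPU_TARGETS=\"$targets\""
echo "$build_cmd"
```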
./main -ngl 99 -m ../koboldcpp/models/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/mistral-7b-instruct-v0.1.Q5_K_M.gguf -mg 0 -p "Write a function in TypeScript that sums numbers"
Log start
main: build = 1503 (4a4fd3e)
main: built with cc (GCC) 13.2.1 20230801 for x86_64-pc-linux-gnu
main: seed  = 1699662201
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 2 ROCm devices:
  Device 0: AMD Radeon RX 7900 XTX, compute capability 11.0
  Device 1: AMD Radeon RX 6900 XT, compute capability 10.3
llama_model_loader: loaded meta data with 20 key-value pairs and 291 tensors from ../koboldcpp/models/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/mistral-7b-instruct-v0.1.Q5_K_M.gguf (version GGUF V2)
llama_model_loader: - tensor    0:                token_embd.weight q5_K     [  4096, 32000,     1,     1 ]
llama_model_loader: - tensor    1:              blk.0.attn_q.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor    2:              blk.0.attn_k.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor    3:              blk.0.attn_v.weight q6_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor    4:         blk.0.attn_output.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor    5:            blk.0.ffn_gate.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor    6:              blk.0.ffn_up.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor    7:            blk.0.ffn_down.weight q6_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor    8:           blk.0.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor    9:            blk.0.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   10:              blk.1.attn_q.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   11:              blk.1.attn_k.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   12:              blk.1.attn_v.weight q6_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   13:         blk.1.attn_output.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   14:            blk.1.ffn_gate.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   15:              blk.1.ffn_up.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   16:            blk.1.ffn_down.weight q6_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor   17:           blk.1.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   18:            blk.1.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   19:              blk.2.attn_q.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   20:              blk.2.attn_k.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   21:              blk.2.attn_v.weight q6_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   22:         blk.2.attn_output.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   23:            blk.2.ffn_gate.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   24:              blk.2.ffn_up.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   25:            blk.2.ffn_down.weight q6_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor   26:           blk.2.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   27:            blk.2.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   28:              blk.3.attn_q.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   29:              blk.3.attn_k.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   30:              blk.3.attn_v.weight q6_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   31:         blk.3.attn_output.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   32:            blk.3.ffn_gate.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   33:              blk.3.ffn_up.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   34:            blk.3.ffn_down.weight q6_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor   35:           blk.3.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   36:            blk.3.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   37:              blk.4.attn_q.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   38:              blk.4.attn_k.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   39:              blk.4.attn_v.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   40:         blk.4.attn_output.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   41:            blk.4.ffn_gate.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   42:              blk.4.ffn_up.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   43:            blk.4.ffn_down.weight q5_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor   44:           blk.4.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   45:            blk.4.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   46:              blk.5.attn_q.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   47:              blk.5.attn_k.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   48:              blk.5.attn_v.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   49:         blk.5.attn_output.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   50:            blk.5.ffn_gate.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   51:              blk.5.ffn_up.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   52:            blk.5.ffn_down.weight q5_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor   53:           blk.5.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   54:            blk.5.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   55:              blk.6.attn_q.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   56:              blk.6.attn_k.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   57:              blk.6.attn_v.weight q6_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   58:         blk.6.attn_output.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   59:            blk.6.ffn_gate.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   60:              blk.6.ffn_up.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   61:            blk.6.ffn_down.weight q6_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor   62:           blk.6.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   63:            blk.6.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   64:              blk.7.attn_q.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   65:              blk.7.attn_k.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   66:              blk.7.attn_v.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   67:         blk.7.attn_output.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   68:            blk.7.ffn_gate.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   69:              blk.7.ffn_up.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   70:            blk.7.ffn_down.weight q5_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor   71:           blk.7.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   72:            blk.7.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   73:              blk.8.attn_q.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   74:              blk.8.attn_k.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   75:              blk.8.attn_v.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   76:         blk.8.attn_output.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   77:            blk.8.ffn_gate.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   78:              blk.8.ffn_up.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   79:            blk.8.ffn_down.weight q5_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor   80:           blk.8.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   81:            blk.8.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   82:              blk.9.attn_q.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   83:              blk.9.attn_k.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   84:              blk.9.attn_v.weight q6_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   85:         blk.9.attn_output.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   86:            blk.9.ffn_gate.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   87:              blk.9.ffn_up.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   88:            blk.9.ffn_down.weight q6_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor   89:           blk.9.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   90:            blk.9.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   91:             blk.10.attn_q.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   92:             blk.10.attn_k.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   93:             blk.10.attn_v.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   94:        blk.10.attn_output.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   95:           blk.10.ffn_gate.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   96:             blk.10.ffn_up.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   97:           blk.10.ffn_down.weight q5_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor   98:          blk.10.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   99:           blk.10.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  100:             blk.11.attn_q.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  101:             blk.11.attn_k.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  102:             blk.11.attn_v.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  103:        blk.11.attn_output.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  104:           blk.11.ffn_gate.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  105:             blk.11.ffn_up.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  106:           blk.11.ffn_down.weight q5_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  107:          blk.11.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  108:           blk.11.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  109:             blk.12.attn_q.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  110:             blk.12.attn_k.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  111:             blk.12.attn_v.weight q6_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  112:        blk.12.attn_output.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  113:           blk.12.ffn_gate.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  114:             blk.12.ffn_up.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  115:           blk.12.ffn_down.weight q6_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  116:          blk.12.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  117:           blk.12.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  118:             blk.13.attn_q.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  119:             blk.13.attn_k.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  120:             blk.13.attn_v.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  121:        blk.13.attn_output.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  122:           blk.13.ffn_gate.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  123:             blk.13.ffn_up.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  124:           blk.13.ffn_down.weight q5_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  125:          blk.13.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  126:           blk.13.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  127:             blk.14.attn_q.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  128:             blk.14.attn_k.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  129:             blk.14.attn_v.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  130:        blk.14.attn_output.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  131:           blk.14.ffn_gate.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  132:             blk.14.ffn_up.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  133:           blk.14.ffn_down.weight q5_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  134:          blk.14.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  135:           blk.14.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  136:             blk.15.attn_q.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  137:             blk.15.attn_k.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  138:             blk.15.attn_v.weight q6_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  139:        blk.15.attn_output.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  140:           blk.15.ffn_gate.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  141:             blk.15.ffn_up.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  142:           blk.15.ffn_down.weight q6_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  143:          blk.15.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  144:           blk.15.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  145:             blk.16.attn_q.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  146:             blk.16.attn_k.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  147:             blk.16.attn_v.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  148:        blk.16.attn_output.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  149:           blk.16.ffn_gate.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  150:             blk.16.ffn_up.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  151:           blk.16.ffn_down.weight q5_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  152:          blk.16.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  153:           blk.16.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  154:             blk.17.attn_q.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  155:             blk.17.attn_k.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  156:             blk.17.attn_v.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  157:        blk.17.attn_output.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  158:           blk.17.ffn_gate.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  159:             blk.17.ffn_up.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  160:           blk.17.ffn_down.weight q5_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  161:          blk.17.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  162:           blk.17.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  163:             blk.18.attn_q.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  164:             blk.18.attn_k.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  165:             blk.18.attn_v.weight q6_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  166:        blk.18.attn_output.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  167:           blk.18.ffn_gate.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  168:             blk.18.ffn_up.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  169:           blk.18.ffn_down.weight q6_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  170:          blk.18.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  171:           blk.18.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  172:             blk.19.attn_q.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  173:             blk.19.attn_k.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  174:             blk.19.attn_v.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  175:        blk.19.attn_output.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  176:           blk.19.ffn_gate.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  177:             blk.19.ffn_up.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  178:           blk.19.ffn_down.weight q5_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  179:          blk.19.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  180:           blk.19.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  181:             blk.20.attn_q.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  182:             blk.20.attn_k.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  183:             blk.20.attn_v.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  184:        blk.20.attn_output.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  185:           blk.20.ffn_gate.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  186:             blk.20.ffn_up.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  187:           blk.20.ffn_down.weight q5_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  188:          blk.20.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  189:           blk.20.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  190:             blk.21.attn_q.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  191:             blk.21.attn_k.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  192:             blk.21.attn_v.weight q6_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  193:        blk.21.attn_output.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  194:           blk.21.ffn_gate.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  195:             blk.21.ffn_up.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  196:           blk.21.ffn_down.weight q6_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  197:          blk.21.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  198:           blk.21.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  199:             blk.22.attn_q.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  200:             blk.22.attn_k.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  201:             blk.22.attn_v.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  202:        blk.22.attn_output.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  203:           blk.22.ffn_gate.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  204:             blk.22.ffn_up.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  205:           blk.22.ffn_down.weight q5_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  206:          blk.22.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  207:           blk.22.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  208:             blk.23.attn_q.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  209:             blk.23.attn_k.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  210:             blk.23.attn_v.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  211:        blk.23.attn_output.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  212:           blk.23.ffn_gate.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  213:             blk.23.ffn_up.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  214:           blk.23.ffn_down.weight q5_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  215:          blk.23.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  216:           blk.23.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  217:             blk.24.attn_q.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  218:             blk.24.attn_k.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  219:             blk.24.attn_v.weight q6_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  220:        blk.24.attn_output.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  221:           blk.24.ffn_gate.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  222:             blk.24.ffn_up.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  223:           blk.24.ffn_down.weight q6_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  224:          blk.24.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  225:           blk.24.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  226:             blk.25.attn_q.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  227:             blk.25.attn_k.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  228:             blk.25.attn_v.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  229:        blk.25.attn_output.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  230:           blk.25.ffn_gate.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  231:             blk.25.ffn_up.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  232:           blk.25.ffn_down.weight q5_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  233:          blk.25.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  234:           blk.25.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  235:             blk.26.attn_q.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  236:             blk.26.attn_k.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  237:             blk.26.attn_v.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  238:        blk.26.attn_output.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  239:           blk.26.ffn_gate.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  240:             blk.26.ffn_up.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  241:           blk.26.ffn_down.weight q5_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  242:          blk.26.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  243:           blk.26.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  244:             blk.27.attn_q.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  245:             blk.27.attn_k.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  246:             blk.27.attn_v.weight q6_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  247:        blk.27.attn_output.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  248:           blk.27.ffn_gate.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  249:             blk.27.ffn_up.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  250:           blk.27.ffn_down.weight q6_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  251:          blk.27.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  252:           blk.27.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  253:             blk.28.attn_q.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  254:             blk.28.attn_k.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  255:             blk.28.attn_v.weight q6_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  256:        blk.28.attn_output.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  257:           blk.28.ffn_gate.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  258:             blk.28.ffn_up.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  259:           blk.28.ffn_down.weight q6_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  260:          blk.28.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  261:           blk.28.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  262:             blk.29.attn_q.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  263:             blk.29.attn_k.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  264:             blk.29.attn_v.weight q6_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  265:        blk.29.attn_output.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  266:           blk.29.ffn_gate.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  267:             blk.29.ffn_up.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  268:           blk.29.ffn_down.weight q6_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  269:          blk.29.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  270:           blk.29.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  271:             blk.30.attn_q.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  272:             blk.30.attn_k.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  273:             blk.30.attn_v.weight q6_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  274:        blk.30.attn_output.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  275:           blk.30.ffn_gate.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  276:             blk.30.ffn_up.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  277:           blk.30.ffn_down.weight q6_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  278:          blk.30.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  279:           blk.30.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  280:             blk.31.attn_q.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  281:             blk.31.attn_k.weight q5_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  282:             blk.31.attn_v.weight q6_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  283:        blk.31.attn_output.weight q5_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  284:           blk.31.ffn_gate.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  285:             blk.31.ffn_up.weight q5_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  286:           blk.31.ffn_down.weight q6_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  287:          blk.31.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  288:           blk.31.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  289:               output_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  290:                    output.weight q6_K     [  4096, 32000,     1,     1 ]
llama_model_loader: - kv   0:                       general.architecture str     
llama_model_loader: - kv   1:                               general.name str     
llama_model_loader: - kv   2:                       llama.context_length u32     
llama_model_loader: - kv   3:                     llama.embedding_length u32     
llama_model_loader: - kv   4:                          llama.block_count u32     
llama_model_loader: - kv   5:                  llama.feed_forward_length u32     
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32     
llama_model_loader: - kv   7:                 llama.attention.head_count u32     
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32     
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32     
llama_model_loader: - kv  10:                       llama.rope.freq_base f32     
llama_model_loader: - kv  11:                          general.file_type u32     
llama_model_loader: - kv  12:                       tokenizer.ggml.model str     
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr     
llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr     
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr     
llama_model_loader: - kv  16:                tokenizer.ggml.bos_token_id u32     
llama_model_loader: - kv  17:                tokenizer.ggml.eos_token_id u32     
llama_model_loader: - kv  18:            tokenizer.ggml.unknown_token_id u32     
llama_model_loader: - kv  19:               general.quantization_version u32     
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q5_K:  193 tensors
llama_model_loader: - type q6_K:   33 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V2
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = mostly Q5_K - Medium
llm_load_print_meta: model params     = 7.24 B
llm_load_print_meta: model size       = 4.78 GiB (5.67 BPW) 
llm_load_print_meta: general.name   = mistralai_mistral-7b-instruct-v0.1
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token  = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.11 MB
llm_load_tensors: using ROCm for GPU acceleration
ggml_cuda_set_main_device: using device 0 (AMD Radeon RX 7900 XTX) as main device
llm_load_tensors: mem required  =   86.04 MB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 35/35 layers to GPU
llm_load_tensors: VRAM used: 4807.05 MB
..................................................................................................
llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: offloading v cache to GPU
llama_kv_cache_init: offloading k cache to GPU
llama_kv_cache_init: VRAM kv self = 64.00 MB
llama_new_context_with_model: kv self size  =   64.00 MB
llama_build_graph: non-view tensors processed: 740/740
llama_new_context_with_model: compute buffer total size = 79.63 MB
llama_new_context_with_model: VRAM scratch buffer: 73.00 MB
llama_new_context_with_model: total VRAM used: 4944.06 MB (model: 4807.05 MB, context: 137.00 MB)
fish: Job 1, './main -ngl 99 -m ../koboldcpp/…' terminated by signal SIGSEGV (Address boundary error)

Failure Logs

Provided above.

@xangelix
Author

Unclear if related to #3991

@shibe2
Contributor

shibe2 commented Nov 11, 2023

Make sure that HIP code is compiled for all your architectures. For example:

objcopy --dump-section .hip_fatbin=/dev/stdout main|strings|grep -F amdhsa-

where "main" is file name of executable that crashes. The output should include something like "amdgcn-amd-amdhsa--gfx1100" and "amdgcn-amd-amdhsa--gfx1030".
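
As a sketch, the build can be pointed at both ISAs explicitly. The flag names below are assumptions drawn from the llama.cpp build system of this era; verify them against your own Makefile/CMakeLists:

```shell
#!/bin/sh
# Hypothetical build invocations; check flag names against your checkout:
#   make LLAMA_HIPBLAS=1 GPU_TARGETS="gfx1030 gfx1100"
#   cmake .. -DLLAMA_HIPBLAS=ON -DAMDGPU_TARGETS="gfx1030;gfx1100"

# After building, confirm the fat binary carries every needed ISA:
check_isas() {
    objcopy --dump-section .hip_fatbin=/dev/stdout "$1" | strings | grep -F amdhsa-
}
# Usage: check_isas ./main   (the output should list each target, e.g.
# "amdgcn-amd-amdhsa--gfx1030" and "amdgcn-amd-amdhsa--gfx1100")
```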

@xangelix
Author

Thanks for the response. Unfortunately, I think it is indeed compiling as expected for both gfx architectures.

❯ objcopy --dump-section .hip_fatbin=/dev/stdout main|strings|grep -F amdhsa-
hipv4-amdgcn-amd-amdhsa--gfx1030
hipv4-amdgcn-amd-amdhsa--gfx1100
amdgcn-amd-amdhsa--gfx1030
amdgcn-amd-amdhsa--gfx1100

@purinda

purinda commented Nov 18, 2023

I can confirm a similar/same issue with ROCm + multi-GPU, running llama.cpp (b83e149).

Running the command below produces this partial output:

HIP_VISIBLE_DEVICES="0,1" ./main --main-gpu 1 -ngl 63 -m ~/Storage/TheBloke_Wizard-Vicuna-30B-Uncensored-GGUF/Wizard-Vicuna-30B-Uncensored.Q4_K_M.gguf -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Write an essay about galaxies in 1000 words ASSISTANT:"
Log start
main: build = 1522 (b83e149)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: seed  = 1700270334
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 2 ROCm devices:
  Device 0: AMD Radeon RX 6600 XT, compute capability 10.3
  Device 1: AMD Radeon RX 6900 XT, compute capability 10.3
llama_model_loader: loaded meta data with 19 key-value pairs and 543 tensors from /home/purinda/Storage/TheBloke_Wizard-Vicuna-30B-Uncensored-GGUF/Wizard-Vicuna-30B-Uncensored.Q4_K_M.gguf (version GGUF V2)

...

llm_load_tensors: using ROCm for GPU acceleration
ggml_cuda_set_main_device: using device 1 (AMD Radeon RX 6900 XT) as main device
llm_load_tensors: mem required  =  114.46 MiB
llm_load_tensors: offloading 60 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 63/63 layers to GPU
llm_load_tensors: VRAM used: 18597.20 MiB
....................................................................................................
llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: offloading v cache to GPU
llama_kv_cache_init: offloading k cache to GPU
llama_kv_cache_init: VRAM kv self = 780.00 MiB
llama_new_context_with_model: kv self size  =  780.00 MiB
llama_build_graph: non-view tensors processed: 1384/1384
llama_new_context_with_model: compute buffer total size = 98.57 MiB
llama_new_context_with_model: VRAM scratch buffer: 97.00 MiB
llama_new_context_with_model: total VRAM used: 19474.20 MiB (model: 18597.20 MiB, context: 877.00 MiB)
Segmentation fault (core dumped)

rocminfo Output

*******
Agent 2
*******
  Name:                    gfx1030
  Uuid:                    GPU-XX
  Marketing Name:          AMD Radeon RX 6600 XT

...

*******
Agent 3
*******
  Name:                    gfx1030
  Uuid:                    GPU-d376a51a8f6549cb
  Marketing Name:          AMD Radeon RX 6900 XT

llama.cpp compiled with gfx architectures

objcopy --dump-section .hip_fatbin=/dev/stdout main|strings|grep -F amdhsa-
hipv4-amdgcn-amd-amdhsa--gfx1030
hipv4-amdgcn-amd-amdhsa--gfx1032
amdgcn-amd-amdhsa--gfx1030
amdgcn-amd-amdhsa--gfx1032
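
One way to tell whether the fault is tied to the multi-GPU split rather than either card alone is to run the same load on each device in isolation. A sketch only; the model path and flags are placeholders:

```shell
#!/bin/sh
# Run the same prompt on each GPU by itself. HIP_VISIBLE_DEVICES restricts
# which devices the HIP runtime exposes to the process, so if each card
# works alone but both together segfault, the bug is in the multi-GPU path.
for dev in 0 1; do
    echo "=== HIP device $dev alone ==="
    HIP_VISIBLE_DEVICES=$dev ./main -ngl 99 -m model.gguf -p "hi" -n 16 \
        || echo "device $dev run failed"
done
```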

@ThatDevopsGuy

I've encountered the same issue, but noticed this Limitations page from AMD, which stipulates that multi-GPU support cannot span multiple PCIe paths: the GPUs must be connected directly to the CPU's PCIe lanes rather than routed through the chipset.

@xangelix - You mentioned it was working before (do you know what commit/tag?). Can you confirm whether your motherboard's configuration supports multiple GPUs connected directly to the CPU? I'm in the same boat as you (just a generation down), and I think I'm out of luck with my specific motherboard's PCIe lane configuration.
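
For anyone wanting to check their own topology, the PCIe tree and ROCm's view of it can be inspected from userspace. A sketch under the assumption that `lspci` and a recent `rocm-smi` are installed; check `rocm-smi --help` on your release:

```shell
#!/bin/sh
# Show where the GPUs sit in the PCIe tree: devices directly under a root
# port use CPU lanes, while chipset-attached devices sit behind an extra
# downstream bridge in the tree view.
lspci -tv | grep -iE 'vga|display|bridge' || true

# ROCm's own view of inter-device links (link type, hop count, NUMA distance):
rocm-smi --showtopo || true
```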

@xangelix
Author

As per https://dlcdnets.asus.com/pub/ASUS/mb/Socket%20AM5/ProArt%20X670E-CREATOR%20WIFI/E21293_ProArt_X670E-CREATOR_WIFI_UM_V2_WEB.pdf?model=ProArt%20X670E-CREATOR%20WIFI (page vii), my motherboard's top two slots, the ones I use for GPUs, run in x8/x8 bifurcation mode, which uses lanes directly from the CPU.

I don't currently know which commit llama.cpp last worked at, but a few days ago, while talking with some koboldcpp folks, I remembered that it ONLY ever worked for me with the lowvram option, which I believe was removed fairly recently. A few other users corroborated this a while ago in the koboldai Discord, and on this repo as far back as #1087 (comment). If I find time to start somewhere, I'd definitely look for a commit where that option was still available (using the linked MR's merge date as the lower bound and the removal of lowvram as the upper bound for where things might have gone wrong).
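
Narrowing that window is mechanical with git bisect. The snippet below demonstrates the mechanics on a throwaway repo so the commands are concrete; in practice the good endpoint would be the lowvram-era merge and the bad endpoint current master, and each step would rebuild llama.cpp and run the segfaulting command:

```shell
#!/bin/sh
# Demonstration of bisect mechanics on a throwaway repo. For the real hunt:
#   git bisect start <bad-commit> <good-commit>
#   ...rebuild and run the reproducer, then `git bisect good` or `git bisect bad`...
# `git log --oneline -S lowvram` (the pickaxe) can locate the removal commit.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email bisect@example.com
git config user.name bisect
for i in 1 2 3 4 5; do
    echo "revision $i" > main.c
    git add main.c
    git commit -qm "commit $i"
done
git bisect start HEAD HEAD~4   # bad endpoint, then good endpoint
git bisect good                # stand-in for "this build ran fine"
git bisect log                 # records the decisions made so far
```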

I would be suspicious of AMD support claims in both the negative and positive direction. Don't let it get your hopes down (but maybe don't expect AMD to directly help either...). I'd guess that page has more to do with enterprise support commitments than with whether it should actually function. I haven't gotten a single gfx1100 PyTorch error since I purchased that card, almost a year before AMD claimed any support for it at all.

@IMbackK
Copy link

IMbackK commented Dec 8, 2023

So I can confirm this bug on an EPYC system with MI50s, i.e. a fully ROCm-supported configuration. I can also confirm that this configuration passes all the rocBLAS tests, and I run lots of multi-GPU workloads on it with no issue.

Below is a backtrace; it seems that hipMemcpy2DAsync is somehow being misused.

gdb --args ./main -p "hi" -m zephyr-7b-beta.Q4_K_M.gguf -ngl 30 -nommq  
GNU gdb (GDB) 13.2
Copyright (C) 2023 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-pc-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<https://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
    <http://www.gnu.org/software/gdb/documentation/>.

For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from ./main...
(gdb) r
Starting program: main -p hi -m zephyr-7b-beta.Q4_K_M.gguf -ngl 30 -nommq

This GDB supports auto-downloading debuginfo from the following URLs:
  <https://debuginfod.archlinux.org>
Enable debuginfod for this session? (y or [n]) y
Debuginfod has been enabled.
To make this setting permanent, add 'set debuginfod enabled on' to .gdbinit.
Downloading separate debug info for system-supplied DSO at 0x7ffff7fc8000                                                                                                                                         
Downloading separate debug info for /opt/rocm/lib/libamdhip64.so.5                                                                                                                                                
Downloading separate debug info for /usr/lib/libc.so.6                                                                                                                                                            
[Thread debugging using libthread_db enabled]                                                                                                                                                                     
Using host libthread_db library "/usr/lib/libthread_db.so.1".
Downloading separate debug info for /opt/rocm/lib/libamd_comgr.so.2
Downloading separate debug info for /opt/rocm/hsa/lib/libhsa-runtime64.so.1                                                                                                                                       
Downloading separate debug info for /usr/lib/libnuma.so.1                                                                                                                                                         
Downloading separate debug info for /usr/lib/libz.so.1                                                                                                                                                            
Downloading separate debug info for /usr/lib/libzstd.so.1                                                                                                                                                         
Downloading separate debug info for /usr/lib/libncursesw.so.6                                                                                                                                                     
Downloading separate debug info for /opt/rocm/lib/libhsakmt.so.1                                                                                                                                                  
Downloading separate debug info for /usr/lib/libelf.so.1                                                                                                                                                          
Downloading separate debug info for /usr/lib/libdrm.so.2                                                                                                                                                          
Downloading separate debug info for /usr/lib/libdrm_amdgpu.so.1                                                                                                                                                   
Log start                                                                                                                                                                                                         
main: build = 1620 (fe680e3)
main: built with cc (GCC) 13.2.1 20230801 for x86_64-pc-linux-gnu
main: seed  = 1702046901
[New Thread 0x7ffedcfff6c0 (LWP 544179)]
[New Thread 0x7ffad79ff6c0 (LWP 544180)]
[Thread 0x7ffad79ff6c0 (LWP 544180) exited]
Downloading separate debug info for /opt/rocm/lib/libhsa-amd-aqlprofile64.so
[New Thread 0x7ffad69ff6c0 (LWP 544185)]                                                                                                                                                                          
[Thread 0x7ffad69ff6c0 (LWP 544185) exited]
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 2 ROCm devices:
  Device 0: AMD Instinct MI60 / MI50, compute capability 9.0
  Device 1: AMD Instinct MI60 / MI50, compute capability 9.0
llama_model_loader: loaded meta data with 21 key-value pairs and 291 tensors from zephyr-7b-beta.Q4_K_M.gguf (version GGUF V3 (latest))
llama_model_loader: - tensor    0:                token_embd.weight q4_K     [  4096, 32000,     1,     1 ]
llama_model_loader: - tensor    1:           blk.0.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor    2:            blk.0.ffn_down.weight q6_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor    3:            blk.0.ffn_gate.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor    4:              blk.0.ffn_up.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor    5:            blk.0.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor    6:              blk.0.attn_k.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor    7:         blk.0.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor    8:              blk.0.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor    9:              blk.0.attn_v.weight q6_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   10:           blk.1.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   11:            blk.1.ffn_down.weight q6_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor   12:            blk.1.ffn_gate.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   13:              blk.1.ffn_up.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   14:            blk.1.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   15:              blk.1.attn_k.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   16:         blk.1.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   17:              blk.1.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   18:              blk.1.attn_v.weight q6_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   19:           blk.2.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   20:            blk.2.ffn_down.weight q6_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor   21:            blk.2.ffn_gate.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   22:              blk.2.ffn_up.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   23:            blk.2.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   24:              blk.2.attn_k.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   25:         blk.2.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   26:              blk.2.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   27:              blk.2.attn_v.weight q6_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   28:            blk.3.ffn_gate.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   29:              blk.3.ffn_up.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   30:              blk.3.attn_k.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   31:         blk.3.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   32:              blk.3.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   33:              blk.3.attn_v.weight q6_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   34:           blk.3.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   35:            blk.3.ffn_down.weight q6_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor   36:            blk.3.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   37:           blk.4.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   38:            blk.4.ffn_down.weight q4_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor   39:            blk.4.ffn_gate.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   40:              blk.4.ffn_up.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   41:            blk.4.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   42:              blk.4.attn_k.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   43:         blk.4.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   44:              blk.4.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   45:              blk.4.attn_v.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   46:           blk.5.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   47:            blk.5.ffn_down.weight q4_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor   48:            blk.5.ffn_gate.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   49:              blk.5.ffn_up.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   50:            blk.5.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   51:              blk.5.attn_k.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   52:         blk.5.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   53:              blk.5.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   54:              blk.5.attn_v.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   55:           blk.6.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   56:            blk.6.ffn_down.weight q6_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor   57:            blk.6.ffn_gate.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   58:              blk.6.ffn_up.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   59:            blk.6.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   60:              blk.6.attn_k.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   61:         blk.6.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   62:              blk.6.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   63:              blk.6.attn_v.weight q6_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   64:           blk.7.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   65:            blk.7.ffn_down.weight q4_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor   66:            blk.7.ffn_gate.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   67:              blk.7.ffn_up.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   68:            blk.7.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   69:              blk.7.attn_k.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   70:         blk.7.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   71:              blk.7.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   72:              blk.7.attn_v.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   73:              blk.8.attn_k.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   74:         blk.8.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   75:              blk.8.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   76:              blk.8.attn_v.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   77:          blk.10.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   78:           blk.10.ffn_down.weight q4_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor   79:           blk.10.ffn_gate.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   80:             blk.10.ffn_up.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   81:           blk.10.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   82:             blk.10.attn_k.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   83:        blk.10.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   84:             blk.10.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   85:             blk.10.attn_v.weight q6_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   86:          blk.11.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   87:           blk.11.ffn_down.weight q6_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor   88:           blk.11.ffn_gate.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   89:             blk.11.ffn_up.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   90:           blk.11.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   91:             blk.11.attn_k.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   92:        blk.11.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   93:             blk.11.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   94:             blk.11.attn_v.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   95:           blk.12.ffn_gate.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   96:             blk.12.ffn_up.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor   97:             blk.12.attn_k.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor   98:        blk.12.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   99:             blk.12.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  100:             blk.12.attn_v.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  101:           blk.8.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  102:            blk.8.ffn_down.weight q4_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  103:            blk.8.ffn_gate.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  104:              blk.8.ffn_up.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  105:            blk.8.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  106:           blk.9.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  107:            blk.9.ffn_down.weight q4_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  108:            blk.9.ffn_gate.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  109:              blk.9.ffn_up.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  110:            blk.9.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  111:              blk.9.attn_k.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  112:         blk.9.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  113:              blk.9.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  114:              blk.9.attn_v.weight q6_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  115:          blk.12.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  116:           blk.12.ffn_down.weight q6_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  117:           blk.12.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  118:          blk.13.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  119:           blk.13.ffn_down.weight q4_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  120:           blk.13.ffn_gate.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  121:             blk.13.ffn_up.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  122:           blk.13.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  123:             blk.13.attn_k.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  124:        blk.13.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  125:             blk.13.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  126:             blk.13.attn_v.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  127:          blk.14.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  128:           blk.14.ffn_down.weight q4_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  129:           blk.14.ffn_gate.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  130:             blk.14.ffn_up.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  131:           blk.14.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  132:             blk.14.attn_k.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  133:        blk.14.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  134:             blk.14.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  135:             blk.14.attn_v.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  136:          blk.15.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  137:           blk.15.ffn_down.weight q6_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  138:           blk.15.ffn_gate.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  139:             blk.15.ffn_up.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  140:           blk.15.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  141:             blk.15.attn_k.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  142:        blk.15.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  143:             blk.15.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  144:             blk.15.attn_v.weight q6_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  145:          blk.16.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  146:           blk.16.ffn_down.weight q4_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  147:           blk.16.ffn_gate.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  148:             blk.16.ffn_up.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  149:           blk.16.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  150:             blk.16.attn_k.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  151:        blk.16.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  152:             blk.16.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  153:             blk.16.attn_v.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  154:             blk.17.attn_k.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  155:        blk.17.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  156:             blk.17.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  157:             blk.17.attn_v.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  158:          blk.17.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  159:           blk.17.ffn_down.weight q4_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  160:           blk.17.ffn_gate.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  161:             blk.17.ffn_up.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  162:           blk.17.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  163:          blk.18.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  164:           blk.18.ffn_down.weight q6_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  165:           blk.18.ffn_gate.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  166:             blk.18.ffn_up.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  167:           blk.18.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  168:             blk.18.attn_k.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  169:        blk.18.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  170:             blk.18.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  171:             blk.18.attn_v.weight q6_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  172:          blk.19.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  173:           blk.19.ffn_down.weight q4_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  174:           blk.19.ffn_gate.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  175:             blk.19.ffn_up.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  176:           blk.19.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  177:             blk.19.attn_k.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  178:        blk.19.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  179:             blk.19.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  180:             blk.19.attn_v.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  181:          blk.20.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  182:           blk.20.ffn_down.weight q4_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  183:           blk.20.ffn_gate.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  184:             blk.20.ffn_up.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  185:           blk.20.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  186:             blk.20.attn_k.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  187:        blk.20.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  188:             blk.20.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  189:             blk.20.attn_v.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  190:           blk.21.ffn_gate.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  191:             blk.21.ffn_up.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  192:             blk.21.attn_k.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  193:        blk.21.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  194:             blk.21.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  195:             blk.21.attn_v.weight q6_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  196:          blk.21.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  197:           blk.21.ffn_down.weight q6_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  198:           blk.21.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  199:          blk.22.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  200:           blk.22.ffn_down.weight q4_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  201:           blk.22.ffn_gate.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  202:             blk.22.ffn_up.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  203:           blk.22.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  204:             blk.22.attn_k.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  205:        blk.22.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  206:             blk.22.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  207:             blk.22.attn_v.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  208:          blk.23.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  209:           blk.23.ffn_down.weight q4_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  210:           blk.23.ffn_gate.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  211:             blk.23.ffn_up.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  212:           blk.23.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  213:             blk.23.attn_k.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  214:        blk.23.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  215:             blk.23.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  216:             blk.23.attn_v.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  217:          blk.24.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  218:           blk.24.ffn_down.weight q6_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  219:           blk.24.ffn_gate.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  220:             blk.24.ffn_up.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  221:           blk.24.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  222:             blk.24.attn_k.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  223:        blk.24.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  224:             blk.24.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  225:             blk.24.attn_v.weight q6_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  226:          blk.25.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  227:           blk.25.ffn_down.weight q4_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  228:           blk.25.ffn_gate.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  229:             blk.25.ffn_up.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  230:           blk.25.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  231:             blk.25.attn_k.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  232:        blk.25.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  233:             blk.25.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  234:             blk.25.attn_v.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  235:             blk.26.attn_k.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  236:        blk.26.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  237:             blk.26.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  238:             blk.26.attn_v.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  239:          blk.26.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  240:           blk.26.ffn_down.weight q4_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  241:           blk.26.ffn_gate.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  242:             blk.26.ffn_up.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  243:           blk.26.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  244:          blk.27.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  245:           blk.27.ffn_down.weight q6_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  246:           blk.27.ffn_gate.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  247:             blk.27.ffn_up.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  248:           blk.27.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  249:             blk.27.attn_k.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  250:        blk.27.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  251:             blk.27.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  252:             blk.27.attn_v.weight q6_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  253:          blk.28.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  254:           blk.28.ffn_down.weight q6_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  255:           blk.28.ffn_gate.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  256:             blk.28.ffn_up.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  257:           blk.28.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  258:             blk.28.attn_k.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  259:        blk.28.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  260:             blk.28.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  261:             blk.28.attn_v.weight q6_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  262:          blk.29.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  263:           blk.29.ffn_down.weight q6_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  264:           blk.29.ffn_gate.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  265:             blk.29.ffn_up.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  266:           blk.29.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  267:             blk.29.attn_k.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  268:        blk.29.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  269:             blk.29.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  270:             blk.29.attn_v.weight q6_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  271:           blk.30.ffn_gate.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  272:             blk.30.ffn_up.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  273:             blk.30.attn_k.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  274:        blk.30.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  275:             blk.30.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  276:             blk.30.attn_v.weight q6_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  277:                    output.weight q6_K     [  4096, 32000,     1,     1 ]
llama_model_loader: - tensor  278:          blk.30.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  279:           blk.30.ffn_down.weight q6_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  280:           blk.30.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  281:          blk.31.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  282:           blk.31.ffn_down.weight q6_K     [ 14336,  4096,     1,     1 ]
llama_model_loader: - tensor  283:           blk.31.ffn_gate.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  284:             blk.31.ffn_up.weight q4_K     [  4096, 14336,     1,     1 ]
llama_model_loader: - tensor  285:           blk.31.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  286:             blk.31.attn_k.weight q4_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  287:        blk.31.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  288:             blk.31.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  289:             blk.31.attn_v.weight q6_K     [  4096,  1024,     1,     1 ]
llama_model_loader: - tensor  290:               output_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = huggingfaceh4_zephyr-7b-beta
llama_model_loader: - kv   2:                       llama.context_length u32              = 32768
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  11:                          general.file_type u32              = 15
llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  16:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  17:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  18:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  19:            tokenizer.ggml.padding_token_id u32              = 2
llama_model_loader: - kv  20:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_K:  193 tensors
llama_model_loader: - type q6_K:   33 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = mostly Q4_K - Medium
llm_load_print_meta: model params     = 7.24 B
llm_load_print_meta: model size       = 4.07 GiB (4.83 BPW) 
llm_load_print_meta: general.name     = huggingfaceh4_zephyr-7b-beta
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: PAD token        = 2 '</s>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.11 MiB
:1:rocdevice.cpp            :3193: 248853145282 us: [pid:543781 tid:0x7ffff676cc00] hsa_amd_pointer_info() failed
llm_load_tensors: using ROCm for GPU acceleration
llm_load_tensors: mem required  =  437.97 MiB
llm_load_tensors: offloading 30 repeating layers to GPU
llm_load_tensors: offloaded 30/33 layers to GPU
llm_load_tensors: VRAM used: 3727.50 MiB
..................................................................................................
llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
:1:rocdevice.cpp            :3193: 248854043387 us: [pid:543781 tid:0x7ffff676cc00] hsa_amd_pointer_info() failed
llama_kv_cache_init: VRAM kv self = 60.00 MB
llama_new_context_with_model: KV self size  =   64.00 MiB, K (f16):   32.00 MiB, V (f16):   32.00 MiB
:1:rocdevice.cpp            :3193: 248854058690 us: [pid:543781 tid:0x7ffff676cc00] hsa_amd_pointer_info() failed
llama_build_graph: non-view tensors processed: 676/676
llama_new_context_with_model: compute buffer total size = 76.07 MiB
:1:rocdevice.cpp            :3193: 248854060517 us: [pid:543781 tid:0x7ffff676cc00] hsa_amd_pointer_info() failed
llama_new_context_with_model: VRAM scratch buffer: 73.00 MiB
llama_new_context_with_model: total VRAM used: 3860.50 MiB (model: 3727.50 MiB, context: 133.00 MiB)
[New Thread 0x7ffad69ff6c0 (LWP 544238)]
[New Thread 0x7ff8623ff6c0 (LWP 544239)]
[New Thread 0x7ff861bfe6c0 (LWP 544240)]
[New Thread 0x7ff8613fd6c0 (LWP 544241)]
[New Thread 0x7ff860bfc6c0 (LWP 544242)]
[New Thread 0x7ff8603fb6c0 (LWP 544243)]
[New Thread 0x7ff85fbfa6c0 (LWP 544244)]
[New Thread 0x7ff85f3f96c0 (LWP 544245)]
[New Thread 0x7ff85ebf86c0 (LWP 544246)]
[New Thread 0x7ff85e3f76c0 (LWP 544247)]
[New Thread 0x7ff85dbf66c0 (LWP 544248)]
[New Thread 0x7ff85d3f56c0 (LWP 544249)]
[New Thread 0x7ff85cbf46c0 (LWP 544250)]
[New Thread 0x7ff85c3f36c0 (LWP 544251)]
[New Thread 0x7ff85bbf26c0 (LWP 544252)]
[New Thread 0x7ff85b3f16c0 (LWP 544253)]
[New Thread 0x7ff85abf06c0 (LWP 544254)]
[New Thread 0x7ff85a3ef6c0 (LWP 544255)]
[New Thread 0x7ff859bee6c0 (LWP 544256)]
[New Thread 0x7ff8593ed6c0 (LWP 544257)]
[New Thread 0x7ff858bec6c0 (LWP 544258)]
[New Thread 0x7ff8583eb6c0 (LWP 544259)]
[New Thread 0x7ff857bea6c0 (LWP 544260)]
[New Thread 0x7ff8563ff6c0 (LWP 544262)]
[Thread 0x7ff8563ff6c0 (LWP 544262) exited]
[New Thread 0x7ffacbc7f6c0 (LWP 544263)]

Thread 1 "main" received signal SIGSEGV, Segmentation fault.
roc::Memory::syncCacheFromHost (this=0x0, gpu=..., syncFlags=...) at /usr/src/debug/hip-runtime-amd/clr-rocm-5.7.1/rocclr/device/rocm/rocmemory.cpp:355
Downloading source file /usr/src/debug/hip-runtime-amd/clr-rocm-5.7.1/rocclr/device/rocm/rocmemory.cpp
355       amd::ScopedLock lock(owner()->lockMemoryOps());                                                                                                                                                         
(gdb) bt
#0  roc::Memory::syncCacheFromHost (this=0x0, gpu=..., syncFlags=...) at /usr/src/debug/hip-runtime-amd/clr-rocm-5.7.1/rocclr/device/rocm/rocmemory.cpp:355
#1  0x00007ffff6afcf0d in roc::VirtualGPU::copyMemory (this=this@entry=0x55556c770040, type=type@entry=4611, srcMem=..., dstMem=..., entire=false, srcOrigin=..., dstOrigin=..., size=..., srcRect=..., 
    dstRect=..., copyMetadata=...) at /usr/src/debug/hip-runtime-amd/clr-rocm-5.7.1/rocclr/device/rocm/rocvirtual.cpp:1795
#2  0x00007ffff6aff94d in roc::VirtualGPU::submitCopyMemory (this=0x55556c770040, cmd=...) at /usr/src/debug/hip-runtime-amd/clr-rocm-5.7.1/rocclr/device/rocm/rocvirtual.cpp:1881
#3  0x00007ffff6ad8931 in amd::Command::enqueue (this=0x55556dc8e520) at /usr/src/debug/hip-runtime-amd/clr-rocm-5.7.1/rocclr/platform/command.cpp:393
#4  0x00007ffff69a3728 in ihipMemcpyCmdEnqueue (isAsync=true, command=<optimized out>) at /usr/src/debug/hip-runtime-amd/clr-rocm-5.7.1/hipamd/src/hip_memory.cpp:2163
#5  ihipMemcpyParam3D (pCopy=pCopy@entry=0x7fffffff9f20, stream=<optimized out>, stream@entry=0x55556c76fbf0, isAsync=isAsync@entry=true)
    at /usr/src/debug/hip-runtime-amd/clr-rocm-5.7.1/hipamd/src/hip_memory.cpp:2268
#6  0x00007ffff69a3847 in ihipMemcpyParam2D (pCopy=pCopy@entry=0x7fffffff9fe0, stream=stream@entry=0x55556c76fbf0, isAsync=isAsync@entry=true)
    at /usr/src/debug/hip-runtime-amd/clr-rocm-5.7.1/hipamd/src/hip_memory.cpp:2277
#7  0x00007ffff69a39ef in ihipMemcpy2D (isAsync=true, stream=0x55556c76fbf0, kind=hipMemcpyDeviceToDevice, height=2, width=2048, spitch=2048, src=0x7ff99f5b0000, dpitch=4096, dst=0x7ff862410960)
    at /usr/src/debug/hip-runtime-amd/clr-rocm-5.7.1/hipamd/src/hip_memory.cpp:2309
#8  hipMemcpy2D_common (isAsync=true, stream=0x55556c76fbf0, kind=hipMemcpyDeviceToDevice, height=2, width=2048, spitch=2048, src=0x7ff99f5b0000, dpitch=4096, dst=0x7ff862410960)
    at /usr/src/debug/hip-runtime-amd/clr-rocm-5.7.1/hipamd/src/hip_memory.cpp:2380
#9  hipMemcpy2D_common (dst=0x7ff862410960, dpitch=4096, src=0x7ff99f5b0000, spitch=2048, width=2048, height=2, kind=hipMemcpyDeviceToDevice, stream=0x55556c76fbf0, isAsync=true)
    at /usr/src/debug/hip-runtime-amd/clr-rocm-5.7.1/hipamd/src/hip_memory.cpp:2366
#10 0x00007ffff69a8070 in hipMemcpy2DAsync (dst=<optimized out>, dpitch=<optimized out>, src=<optimized out>, spitch=<optimized out>, width=<optimized out>, height=<optimized out>, kind=<optimized out>, 
    stream=<optimized out>) at /usr/src/debug/hip-runtime-amd/clr-rocm-5.7.1/hipamd/src/hip_memory.cpp:2402
#11 0x00005555556cc946 in ggml_cuda_op_mul_mat (src0=0x7ffacbda22a0, src1=0x7ff86ba46d70, dst=0x7ff86ba47070, 
    op=0x5555556df010 <ggml_cuda_op_mul_mat_q(ggml_tensor const*, ggml_tensor const*, ggml_tensor*, char const*, float const*, char const*, float*, long, long, long, long, ihipStream_t* const&)>, 
    convert_src1_to_q8_1=true) at ggml-cuda.cu:7626
#12 0x00005555556ba3bc in ggml_cuda_mul_mat (src0=0x7ffacbda22a0, src1=0x7ff86ba46d70, dst=0x7ff86ba47070) at ggml-cuda.cu:8046
#13 0x00005555556b9cc4 in ggml_cuda_compute_forward (params=0x7fffffffaff0, tensor=0x7ff86ba47070) at ggml-cuda.cu:8780
#14 0x00005555555b0c01 in ggml_compute_forward (params=0x7fffffffaff0, tensor=0x7ff86ba47070) at ggml.c:13941
#15 0x00005555555b736d in ggml_graph_compute_thread (data=0x7fffffffb040) at ggml.c:16085
#16 0x00005555555b81b9 in ggml_graph_compute (cgraph=0x7ff86ba00020, cplan=0x7fffffffb360) at ggml.c:16327
#17 0x00005555555c7019 in ggml_graph_compute_helper (buf=std::vector of length 34176, capacity 34176 = {...}, graph=0x7ff86ba00020, n_threads=24) at llama.cpp:676
#18 0x00005555555df2a6 in llama_decode_internal (lctx=..., batch=...) at llama.cpp:5964
#19 0x00005555555ed87b in llama_decode (ctx=0x55556c77c6e0, batch=...) at llama.cpp:9882
#20 0x0000555555669f91 in llama_init_from_gpt_params (params=...) at common/common.cpp:1141
#21 0x000055555556857e in main (argc=8, argv=0x7fffffffd5d8) at examples/main/main.cpp:187

@EmiliaTheGoddess

I can confirm this issue for a single GPU as well. I only have an RX 590 (gfx803) and it gives the same error. This started happening a few weeks ago after a system update. I believe this is ROCm's fault; I even tried a complete reinstall of my system and nothing changed. Other ROCm programs, like Koboldcpp and Stable Diffusion, don't work either.

:1:rocdevice.cpp            :3193: 3363869470 us: [pid:132513 tid:0x7f8310082c00] hsa_amd_pointer_info() failed
llm_load_tensors: using ROCm for GPU acceleration
llm_load_tensors: mem required  = 7024.03 MiB
llm_load_tensors: offloading 0 repeating layers to GPU
llm_load_tensors: offloaded 0/41 layers to GPU
llm_load_tensors: VRAM used: 0.00 MiB
...................................................................................................
llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
:1:rocdevice.cpp            :3193: 3401642804 us: [pid:132513 tid:0x7f8310082c00] hsa_amd_pointer_info() failed
llama_new_context_with_model: KV self size  =  400.00 MiB, K (f16):  200.00 MiB, V (f16):  200.00 MiB
:1:rocdevice.cpp            :3193: 3401724417 us: [pid:132513 tid:0x7f8310082c00] hsa_amd_pointer_info() failed
llama_build_graph: non-view tensors processed: 844/844
llama_new_context_with_model: compute buffer total size = 78.19 MiB
:1:rocdevice.cpp            :3193: 3401725757 us: [pid:132513 tid:0x7f8310082c00] hsa_amd_pointer_info() failed
llama_new_context_with_model: VRAM scratch buffer: 75.00 MiB
llama_new_context_with_model: total VRAM used: 75.00 MiB (model: 0.00 MiB, context: 75.00 MiB)

@IMbackK

IMbackK commented Dec 19, 2023

That's a totally unrelated issue; this one affects peer-to-peer transfers in llama.cpp only (nowhere else). GFX803 is known to be broken: it fails a lot of ROCm unit tests, and the oldest architecture that passes all ROCm tests is GFX900.

@sroecker
Contributor

sroecker commented Dec 20, 2023

I've encountered the same issue, but noticed this Limitations page from AMD, which stipulates that multi-GPU support cannot span multiple PCIe paths: the GPUs must be connected directly to the CPU, as opposed to going through the chipset.
@xangelix - you mentioned it was working before (do you know which commit/tag?). Can you confirm whether your motherboard's configuration supports multiple GPUs directly connected to the CPU? I'm in the same boat as you (just a generation down), and I think I'm out of luck with my specific motherboard's PCIe lane configuration.

As per https://dlcdnets.asus.com/pub/ASUS/mb/Socket%20AM5/ProArt%20X670E-CREATOR%20WIFI/E21293_ProArt_X670E-CREATOR_WIFI_UM_V2_WEB.pdf?model=ProArt%20X670E-CREATOR%20WIFI (page vii), my motherboard's top two slots, the ones I use for GPUs, run in x8/x8 bifurcation mode, which uses lanes directly from the CPU.

I don't know at the moment which commit llama.cpp last worked with, but I remembered a few days ago, while talking to some koboldcpp folk, that it ONLY ever worked for me with the low-vram option, which I believe was removed fairly recently. I've heard this corroborated by a few other users a while ago, in the KoboldAI Discord and on this repo as far back as #1087 (comment). If I have time to start somewhere, I'd definitely look for a commit where that option was still available (the linked PR's merge date as a lower bound and the removal of low-vram as an upper bound on where things might have gone wrong).

I would be suspicious of AMD support claims in both the negative and positive directions. Don't let it get your hopes down (but maybe don't expect AMD to help directly either...). I'd guess that page has more to do with enterprise support commitments than with whether it should actually function. I haven't gotten a single gfx1100 PyTorch error since I purchased that card, almost a year before AMD claimed any support for it at all.

Man, I should have read your comment more carefully. I just bisected back to b1060 without success, apart from getting it to run on one GPU with HIP_VISIBLE_DEVICES=0 in a two-GPU setup (6600M, both PCIe 3.0 x16, Fedora Rawhide, ROCm 6.0 amdgpu-install). I just got these cards for my old desktop, so I don't have much recent AMD experience.
The low-vram option was disabled in b1289. I just tried b1288 and I can run Mistral 7B Instruct v0.2 on one GPU without any AMD environment variables, just by using the -lv option!
I can even run theknium-OpenHermes-13B.Q6_K on two GPUs with the low-vram option now:

llm_load_tensors: ggml ctx size =    0.12 MB
llm_load_tensors: using ROCm for GPU acceleration
ggml_cuda_set_main_device: using device 0 (AMD Radeon RX 6600M) as main device
llm_load_tensors: mem required  =  128.31 MB (+  400.00 MB per state)
llm_load_tensors: offloading 40 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: cannot offload v cache to GPU due to low VRAM option
llm_load_tensors: cannot offload k cache to GPU due to low VRAM option
llm_load_tensors: offloaded 41/43 layers to GPU
llm_load_tensors: VRAM used: 10056 MB
...................................................................................................
llama_new_context_with_model: kv self size  =  400.00 MB
llama_new_context_with_model: compute buffer total size =   80.88 MB
llama_new_context_with_model: not allocating a VRAM scratch buffer due to low VRAM option

system_info: n_threads = 4 / 8 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | 
sampling: repeat_last_n = 64, repeat_penalty = 1.100000, presence_penalty = 0.000000, frequency_penalty = 0.000000, top_k = 40, tfs_z = 1.000000, top_p = 0.950000, typical_p = 1.000000, temp = 0.800000, mirostat = 0, mirostat_lr = 0.100000, mirostat_ent = 5.000000
generate: n_ctx = 512, n_batch = 512, n_predict = 128, n_keep = 0


 The meaning of life is a philosophical question that has been discussed by great minds throughout history. Here are some of the most insightful, thought-provoking and inspirational quotes about the meaning of life that have been shared through time.
1. “The only way to find true happiness is to risk being completely cut open.” ― Chuck Palahniuk, Fight Club
2. “Try to understand men, if you understand each other you will be kind to each other. Knowing a man well never leads to hate and contempt. A great gift within us is the ability to understand each other.” ― Leo Tolstoy,
llama_print_timings:        load time =  2428.81 ms
llama_print_timings:      sample time =    83.74 ms /   128 runs   (    0.65 ms per token,  1528.50 tokens per second)
llama_print_timings: prompt eval time =   223.47 ms /     6 tokens (   37.24 ms per token,    26.85 tokens per second)
llama_print_timings:        eval time =  9679.43 ms /   127 runs   (   76.22 ms per token,    13.12 tokens per second)
llama_print_timings:       total time = 10035.06 ms

So is there any way to get --low-vram back, or fix it any other way? Happy to help.
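The single-GPU workaround from the bisect above can be sketched as a short shell snippet. The `HIP_VISIBLE_DEVICES=0` setting is taken from the comments in this thread; the model path and `-ngl` value in the commented invocation are placeholders, not from the thread:

```shell
# Pin ROCm/HIP to a single GPU so the multi-GPU peer-to-peer copy path
# that segfaults is never exercised (workaround reported above).
export HIP_VISIBLE_DEVICES=0

# Placeholder invocation -- substitute your own model path and layer count:
# ./main -m models/mistral-7b-instruct-v0.2.Q4_K_M.gguf -ngl 33 -p "Hello"
echo "HIP_VISIBLE_DEVICES=$HIP_VISIBLE_DEVICES"
```

With the device pinned this way, llama.cpp only ever sees GPU 0, matching the working single-GPU runs reported above.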

@sroecker
Contributor

The difference between one GPU (HIP_VISIBLE_DEVICES=0) and the --low-vram option is:

lm_load_tensors: using ROCm for GPU acceleration
llm_load_tensors: mem required  =  132.91 MB (+   64.00 MB per state)
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloading v cache to GPU
llm_load_tensors: offloading k cache to GPU
llm_load_tensors: offloaded 35/35 layers to GPU
llm_load_tensors: VRAM used: 7270 MB
...................................................................................................
llama_new_context_with_model: kv self size  =   64.00 MB
llama_new_context_with_model: compute buffer total size =   78.88 MB
llama_new_context_with_model: VRAM scratch buffer: 73.00 MB
llama_print_timings:        eval time =  4960.56 ms /   127 runs   (   39.06 ms per token,    25.60 tokens per second)

vs

llm_load_tensors: using ROCm for GPU acceleration
ggml_cuda_set_main_device: using device 0 (AMD Radeon RX 6600M) as main device
llm_load_tensors: mem required  =  132.92 MB (+   64.00 MB per state)
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: cannot offload v cache to GPU due to low VRAM option
llm_load_tensors: cannot offload k cache to GPU due to low VRAM option
llm_load_tensors: offloaded 33/35 layers to GPU
llm_load_tensors: VRAM used: 7206 MB
...................................................................................................
llama_new_context_with_model: kv self size  =   64.00 MB
llama_new_context_with_model: compute buffer total size =   78.88 MB
llama_new_context_with_model: not allocating a VRAM scratch buffer due to low VRAM option
llama_print_timings:        eval time =  6103.21 ms /   127 runs   (   48.06 ms per token,    20.81 tokens per second)

So only about a 5 tokens-per-second difference.

@slaren
Collaborator

slaren commented Dec 20, 2023

The reason --low-vram was removed is that you can get very similar VRAM usage by reducing the batch size and disabling KV offloading, i.e. with --no-kv-offload -b 1. I am not sure whether it also happened to work around some issue with ROCm, but that needs to be fixed separately.
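Written out as a command line, the suggested replacement looks something like the following sketch; the two flags are the ones named in the comment above, while the model path and `-ngl` value are placeholders:

```shell
# Approximate the removed --low-vram behaviour with the flags suggested
# above: disable KV-cache offload and drop the batch size to 1.
LLAMA_ARGS="--no-kv-offload -b 1 -ngl 33 -m models/model.gguf"

# Placeholder invocation:
# ./main $LLAMA_ARGS -p "Hello"
echo "$LLAMA_ARGS"
```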

@sroecker
Contributor

The reason --low-vram was removed is that you can get very similar VRAM usage by reducing the batch size and disabling KV offloading, i.e. with --no-kv-offload -b 1. I am not sure whether it also happened to work around some issue with ROCm, but that needs to be fixed separately.

Seems like it does somehow work around a ROCm issue. I just tested on HEAD and it segfaulted:

llm_load_tensors: ggml ctx size =    0.11 MiB
llm_load_tensors: using ROCm for GPU acceleration
llm_load_tensors: mem required  =  132.92 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: VRAM used: 7205.83 MiB
...................................................................................................
llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: KV self size  =   64.00 MiB, K (f16):   32.00 MiB, V (f16):   32.00 MiB
llama_build_graph: non-view tensors processed: 676/676
llama_new_context_with_model: compute buffer total size = 3.33 MiB
llama_new_context_with_model: VRAM scratch buffer: 0.14 MiB
llama_new_context_with_model: total VRAM used: 7205.97 MiB (model: 7205.83 MiB, context: 0.14 MiB)
Segmentation fault (core dumped)

@sroecker
Contributor

According to AMD_LOG_LEVEL=3, this is the culprit:

:3:hip_module.cpp           :668 : 19010305986 us: [pid:16107 tid:0x7f5ad38adc00]  hipLaunchKernel ( 0x607820, {7168,1,1}, {32,1,1}, 0x7ffc284bcb80, 0, stream:0x1e99b70 ) 
:3:rocvirtual.cpp           :709 : 19010305991 us: [pid:16107 tid:0x7f5ad38adc00] Arg0:   = ptr:0x7f55e4600000 obj:[0x7f55e4600000-0x7f55e63c0000]
:3:rocvirtual.cpp           :709 : 19010305995 us: [pid:16107 tid:0x7f5ad38adc00] Arg1:   = ptr:0x7f55da32f000 obj:[0x7f55da32f000-0x7f55da331200]
:3:rocvirtual.cpp           :709 : 19010305999 us: [pid:16107 tid:0x7f5ad38adc00] Arg2:   = ptr:0x7f55da30c0e0 obj:[0x7f55da304000-0x7f55da328880]
:3:rocvirtual.cpp           :784 : 19010306005 us: [pid:16107 tid:0x7f5ad38adc00] Arg3:   = val:4096
:3:rocvirtual.cpp           :784 : 19010306008 us: [pid:16107 tid:0x7f5ad38adc00] Arg4:   = val:7168
:3:rocvirtual.cpp           :2925: 19010306010 us: [pid:16107 tid:0x7f5ad38adc00] ShaderName : _ZL13mul_mat_vec_qILi32ELi8E10block_q8_0Li2EXadL_ZL17vec_dot_q8_0_q8_1PKvPK10block_q8_1RKiEEEvS2_S2_Pfii
:3:hip_module.cpp           :669 : 19010306014 us: [pid:16107 tid:0x7f5ad38adc00] hipLaunchKernel: Returned hipSuccess : 
:3:hip_error.cpp            :35  : 19010306017 us: [pid:16107 tid:0x7f5ad38adc00]  hipGetLastError (  ) 
:3:hip_device_runtime.cpp   :622 : 19010306020 us: [pid:16107 tid:0x7f5ad38adc00]  hipGetDevice ( 0x7ffc284bcc30 ) 
:3:hip_device_runtime.cpp   :630 : 19010306023 us: [pid:16107 tid:0x7f5ad38adc00] hipGetDevice: Returned hipSuccess : 
:3:hip_device_runtime.cpp   :652 : 19010306025 us: [pid:16107 tid:0x7f5ad38adc00]  hipSetDevice ( 1 ) 
:3:hip_device_runtime.cpp   :656 : 19010306028 us: [pid:16107 tid:0x7f5ad38adc00] hipSetDevice: Returned hipSuccess : 
:3:hip_stream.cpp           :555 : 19010306032 us: [pid:16107 tid:0x7f5ad38adc00]  hipStreamWaitEvent ( stream:0x12af81f0, event:0x12a3cc60, 0 ) 
:3:hip_stream.cpp           :556 : 19010306036 us: [pid:16107 tid:0x7f5ad38adc00] hipStreamWaitEvent: Returned hipSuccess : 
:3:hip_memory.cpp           :1475: 19010306040 us: [pid:16107 tid:0x7f5ad38adc00]  hipMemcpyAsync ( 0x7f53cd205000, 0x7f55da32f000, 4608, hipMemcpyDeviceToDevice, stream:0x12af81f0 ) 
:3:rocvirtual.hpp           :66  : 19010306047 us: [pid:16107 tid:0x7f5ad38adc00] Host active wait for Signal = (0x7f59e31ea080) for -1 ns
:3:rocvirtual.hpp           :66  : 19010306054 us: [pid:16107 tid:0x7f5ad38adc00] Host active wait for Signal = (0x7f59e3169d00) for -1 ns
:3:hip_memory.cpp           :1476: 19010306190 us: [pid:16107 tid:0x7f5ad38adc00] hipMemcpyAsync: Returned hipSuccess : : duration: 150 us
:3:hip_platform.cpp         :193 : 19010306196 us: [pid:16107 tid:0x7f5ad38adc00]  __hipPushCallConfiguration ( {7168,1,1}, {32,1,1}, 0, stream:0x12af81f0 ) 
:3:hip_platform.cpp         :197 : 19010306201 us: [pid:16107 tid:0x7f5ad38adc00] __hipPushCallConfiguration: Returned hipSuccess : 
:3:hip_platform.cpp         :202 : 19010306205 us: [pid:16107 tid:0x7f5ad38adc00]  __hipPopCallConfiguration ( {7168,1,1}, {32,1,1}, 0x7ffc284bcb40, 0x7ffc284bcb38 ) 
:3:hip_platform.cpp         :211 : 19010306209 us: [pid:16107 tid:0x7f5ad38adc00] __hipPopCallConfiguration: Returned hipSuccess : 
:3:hip_module.cpp           :668 : 19010306214 us: [pid:16107 tid:0x7f5ad38adc00]  hipLaunchKernel ( 0x607820, {7168,1,1}, {32,1,1}, 0x7ffc284bcb80, 0, stream:0x12af81f0 ) 
:3:rocvirtual.cpp           :709 : 19010306220 us: [pid:16107 tid:0x7f5ad38adc00] Arg0:   = ptr:0x7f55e2600000 obj:[0x7f55e2600000-0x7f55e43c0000]
:3:rocvirtual.cpp           :709 : 19010306223 us: [pid:16107 tid:0x7f5ad38adc00] Arg1:   = ptr:0x7f53cd205000 obj:[0x7f53cd205000-0x7f53cd206300]
:3:rocvirtual.cpp           :709 : 19010306227 us: [pid:16107 tid:0x7f5ad38adc00] Arg2:   = ptr:0x7f53cd20b000 obj:[0x7f53cd20b000-0x7f53cd212600]
:3:rocvirtual.cpp           :784 : 19010306231 us: [pid:16107 tid:0x7f5ad38adc00] Arg3:   = val:4096
:3:rocvirtual.cpp           :784 : 19010306237 us: [pid:16107 tid:0x7f5ad38adc00] Arg4:   = val:7168
:3:rocvirtual.cpp           :2925: 19010306240 us: [pid:16107 tid:0x7f5ad38adc00] ShaderName : _ZL13mul_mat_vec_qILi32ELi8E10block_q8_0Li2EXadL_ZL17vec_dot_q8_0_q8_1PKvPK10block_q8_1RKiEEEvS2_S2_Pfii
:3:hip_module.cpp           :669 : 19010306244 us: [pid:16107 tid:0x7f5ad38adc00] hipLaunchKernel: Returned hipSuccess : 
:3:hip_error.cpp            :35  : 19010306247 us: [pid:16107 tid:0x7f5ad38adc00]  hipGetLastError (  ) 
:3:hip_memory.cpp           :2442: 19010306251 us: [pid:16107 tid:0x7f5ad38adc00]  hipMemcpy2DAsync ( 0x7f55da3130e0, 57344, 0x7f53cd20b000, 28672, 28672, 1, hipMemcpyDeviceToDevice, stream:0x12af81f0 ) 

@sroecker
Contributor

Apologies, it was cut off. I've attached the full log, where you can see it switching devices even though the 7B Q8 model fits into the VRAM of one GPU just fine.
multi_gpu_fail_mistral7b.txt

@userbox020

Sup guys, I'm also getting the segmentation fault when using my two RX 6800s.
Which version of llama.cpp must I compile to get the --low-vram flag back and still have multi-GPU support?
Also, I'm using ooba to test my LLMs, and ooba uses llama-cpp-python; what else would I need to modify besides compiling the older version?

@ghost
Copy link

ghost commented Jan 12, 2024

I recently upgraded an old working version of llama.cpp where x2 GPUs worked flawlessly with ROCm.

I'm unable to use llama.cpp at all now. Other loaders seem to work fine, so it is likely a problem with llama.cpp and not ROCm (6.0).

@sroecker
Copy link
Contributor

sroecker commented Jan 28, 2024 via email

@userbox020
Copy link

I recently upgraded an old working version of llama.cpp where x2 GPUs worked flawlessly with ROCm.

I'm unable to use llama.cpp at all now. Other loaders seem to work fine, so it is likely a problem with llama.cpp and not ROCm (6.0).

Sup bro, try the latest llama.cpp version. It's working great now with multi-GPU, and it even works with ROCm 6.
I'm running 2 RX 6800s, and the problem I'm facing is that if I add my RX 6700 I get the following inference error:

CUDA error: shared object initialization failed
  current device: 0, in function ggml_cuda_op_flatten at ggml-cuda.cu:9181
  hipGetLastError()
GGML_ASSERT: ggml-cuda.cu:241: !"CUDA error"
Could not attach to process.  If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user.  For more details, see /etc/sysctl.d/10-ptrace.conf
ptrace: Operation not permitted.
No stack.
The program is not being run.
Aborted (core dumped)

Not sure what I need to do to solve it; I'm going to start reading and see what I can find.
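As an aside, the "ptrace: Operation not permitted" part of that output only means llama.cpp's crash handler couldn't attach a debugger for a backtrace; the real failure is the `hipGetLastError` assert above it. A sketch of temporarily relaxing the Yama ptrace restriction so the backtrace can print (assuming a Linux system with the Yama LSM enabled, as the `/proc/sys/kernel/yama/ptrace_scope` hint suggests):

```shell
# Allow non-root processes to ptrace-attach until the next reboot
sudo sysctl -w kernel.yama.ptrace_scope=0
```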

@userbox020
Copy link

Sup bros, I installed ROCm 5.6 and compiled the latest llama.cpp as follows:

make -j16 LLAMA_HIPBLAS=1 LLAMA_HIP_UMA=1 AMDGPU_TARGETS=gfx1030

Then I set the following environment variables:

export ROCM_PATH=/opt/rocm
export HCC_AMDGPU_TARGET=gfx1030
export HSA_OVERRIDE_GFX_VERSION=10.3.0

Now I'm able to run one RX 6900, two RX 6800s, and one RX 6700 all together in multi-GPU. Everything is working great now! I'm on an old mobo with PCIe x1 gen 1, so models take a long time to load, but once loaded the inference is fast.

I have an extra RX 5700 and I'm wondering how to add it to this setup. Any ideas?
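One way to sanity-check the build target and overrides above is to list the ISA each agent actually reports. The RX 6800/6900 are gfx1030, the RX 6700 family is gfx1031 (which the `HSA_OVERRIDE_GFX_VERSION=10.3.0` override maps onto gfx1030 code objects), while an RX 5700 is gfx1010, an RDNA1 part that the 10.3.0 override is unlikely to work for:

```shell
# Print the unique gfx ISAs the ROCm runtime sees across all agents
rocminfo | grep -o 'gfx[0-9a-f]*' | sort -u
```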

@userbox020
Copy link

@xangelix bro, confirm whether my steps above work for you so we can close the issue.

@userbox020
Copy link

image

@github-actions github-actions bot added the stale label Mar 19, 2024
Copy link
Contributor

github-actions bot commented Apr 2, 2024

This issue was closed because it has been inactive for 14 days since being marked as stale.

@github-actions github-actions bot closed this as completed Apr 2, 2024
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment
Projects
None yet
Development

No branches or pull requests

9 participants