
stablediffusion fails to install on DGX Spark #7504

@k3mist

Description


LocalAI version:
sha-ef44ace-nvidia-l4t-arm64-cuda-13

v2.29.0-nvidia-l4t-arm64

Environment, CPU architecture, OS, and Version:
docker

-> % lscpu
Architecture:                aarch64
  CPU op-mode(s):            64-bit
  Byte Order:                Little Endian
CPU(s):                      20
  On-line CPU(s) list:       0-19
Vendor ID:                   ARM
  Model name:                Cortex-X925
    Model:                   1
    Thread(s) per core:      1
    Core(s) per socket:      10
    Socket(s):               1
    Stepping:                r0p1
    CPU(s) scaling MHz:      90%
    CPU max MHz:             4004.0000
    CPU min MHz:             1378.0000
    BogoMIPS:                2000.00
    Flags:                   fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fc
                             ma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm sb paca 
                             pacg dcpodp sve2 sveaes svepmull svebitperm svesha3 svesm4 flagm2 frint svei8mm svebf16
                              i8mm bf16 dgh bti ecv afp wfxt
  Model name:                Cortex-A725
    Model:                   1
    Thread(s) per core:      1
    Core(s) per socket:      10
    Socket(s):               1
    Stepping:                r0p1
    CPU(s) scaling MHz:      86%
    CPU max MHz:             2860.0000
    CPU min MHz:             338.0000
    BogoMIPS:                2000.00
    Flags:                   fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fc
                             ma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm sb paca 
                             pacg dcpodp sve2 sveaes svepmull svebitperm svesha3 svesm4 flagm2 frint svei8mm svebf16
                              i8mm bf16 dgh bti ecv afp wfxt
-> % lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 24.04.3 LTS
Release:        24.04
Codename:       noble
-> % docker exec -it local-ai bash
root@5837be385699:/# nvidia-smi
Wed Dec 10 13:14:41 2025       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 580.95.05              Driver Version: 580.95.05      CUDA Version: 13.0     |
+-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GB10                    On  |   0000000F:01:00.0 Off |                  N/A |
| N/A   39C    P8              4W /  N/A  | Not Supported          |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+

Describe the bug
The stablediffusion backend fails to install with:

Error installing backend "cuda13-nvidia-l4t-arm64-stablediffusion-ggml": not a valid backend: run file not found "/backends/cuda13-nvidia-l4t-arm64-stablediffusion-ggml/run.sh"

To Reproduce
docker run -ti --name local-ai -p 32000:8080 --gpus all localai/localai:sha-ef44ace-nvidia-l4t-arm64-cuda-13

Navigate to the LocalAI web UI > Backends > try to install cuda13-nvidia-l4t-arm64-stablediffusion-ggml

Expected behavior
The backend should install successfully.

Logs
1:08PM ERR Run file not found runFile=/backends/cuda13-nvidia-l4t-arm64-stablediffusion-ggml/run.sh
1:08PM ERR error installing backend localai@cuda13-nvidia-l4t-arm64-stablediffusion-ggml error="not a valid backend: run file not found "/backends/cuda13-nvidia-l4t-arm64-stablediffusion-ggml/run.sh""
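The missing file is easy to confirm from the host with standard docker commands; the path below is taken verbatim from the error message, and the container name matches the run command above:

-> % docker exec -it local-ai ls -la /backends/cuda13-nvidia-l4t-arm64-stablediffusion-ggml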

Additional context
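For debugging, the backends directory can also be bind-mounted from the host so that whatever the failed install leaves behind survives the container. This is only a sketch, assuming /backends is the backend path inside the container (as the error message suggests):

-> % docker run -ti --name local-ai -p 32000:8080 --gpus all \
       -v "$PWD/backends:/backends" \
       localai/localai:sha-ef44ace-nvidia-l4t-arm64-cuda-13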
