
How to set to AMD mode? #160

@linonetwo

Description

```powershell
PS E:\repo\ComfyUI> ..\stable-diffusion-webui\venv\Scripts\Activate.ps1  # based on sd webui's venv, which can run on AMD cards

python main.py  # or $env:HSA_OVERRIDE_GFX_VERSION=10.3.0 python main.py
```
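(Side note, as an assumption worth flagging: `$env:VAR=value command` on one line is bash-style prefix syntax and is not valid in PowerShell, where the assignment has to be its own statement. `HSA_OVERRIDE_GFX_VERSION` is also only read by ROCm builds of PyTorch, i.e. on Linux. A sketch of both forms:

```shell
# PowerShell requires the assignment as a separate statement:
#   $env:HSA_OVERRIDE_GFX_VERSION = "10.3.0"
#   python main.py
# The bash (Linux/ROCm) equivalent, written as separate statements here:
export HSA_OVERRIDE_GFX_VERSION=10.3.0
echo "HSA_OVERRIDE_GFX_VERSION=$HSA_OVERRIDE_GFX_VERSION"
```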

This causes a "CUDA not found" error, even though I'm using an AMD RX 480 card (poor guy):

```
Set vram state to: NORMAL VRAM
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
Starting server

To see the GUI go to: http://127.0.0.1:8188
got prompt
Traceback (most recent call last):
  File "E:\repo\ComfyUI\execution.py", line 174, in execute
    executed += recursive_execute(self.server, prompt, self.outputs, x, extra_data)
  File "E:\repo\ComfyUI\execution.py", line 54, in recursive_execute
    executed += recursive_execute(server, prompt, outputs, input_unique_id, extra_data)
  File "E:\repo\ComfyUI\execution.py", line 54, in recursive_execute
    executed += recursive_execute(server, prompt, outputs, input_unique_id, extra_data)
  File "E:\repo\ComfyUI\execution.py", line 54, in recursive_execute
    executed += recursive_execute(server, prompt, outputs, input_unique_id, extra_data)
  File "E:\repo\ComfyUI\execution.py", line 63, in recursive_execute
    outputs[unique_id] = getattr(obj, obj.FUNCTION)(**input_data_all)
  File "E:\repo\ComfyUI\nodes.py", line 217, in load_checkpoint
    out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
  File "E:\repo\ComfyUI\comfy\sd.py", line 779, in load_checkpoint_guess_config
    fp16 = model_management.should_use_fp16()
  File "E:\repo\ComfyUI\comfy\model_management.py", line 226, in should_use_fp16
    if torch.cuda.is_bf16_supported():
  File "E:\repo\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\__init__.py", line 102, in is_bf16_supported
    return torch.cuda.get_device_properties(torch.cuda.current_device()).major >= 8 and cuda_maj_decide
  File "E:\repo\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\__init__.py", line 552, in current_device
    _lazy_init()
  File "E:\repo\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\__init__.py", line 229, in _lazy_init
    torch._C._cuda_init()
RuntimeError: The NVIDIA driver on your system is too old (found version 5000). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver.

E:\repo\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\__init__.py:88: UserWarning: CUDA initialization: The NVIDIA driver on your system is too old (found version 5000). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver. (Triggered internally at ..\c10\cuda\CUDAFunctions.cpp:109.)
  return torch._C._cuda_getDeviceCount() > 0
```
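The traceback suggests the venv's torch is a CUDA wheel, which will always fail on an AMD card. A quick hypothetical check (not from the issue; the function name is mine) to see which backend the installed wheel targets, using the `torch.version.cuda` / `torch.version.hip` attributes:

```python
# Hypothetical diagnostic: report which GPU backend the installed PyTorch
# wheel was built for. A stock CUDA wheel raises the "NVIDIA driver ... too
# old" error on AMD cards; a ROCm (HIP) wheel is needed there instead.
import importlib.util


def torch_backend() -> str:
    """Return "rocm", "cuda", "cpu", or "not installed" for the current torch."""
    if importlib.util.find_spec("torch") is None:
        return "not installed"
    import torch

    # torch.version.hip is non-None only on ROCm builds;
    # torch.version.cuda is non-None only on CUDA builds.
    if getattr(torch.version, "hip", None):
        return "rocm"
    if torch.version.cuda:
        return "cuda"
    return "cpu"


print(torch_backend())
```

Run inside the activated venv, this should print `cuda` for the webui environment above, confirming the wheel mismatch.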

I have also followed the AMD (Linux only) instructions at https://github.com/comfyanonymous/ComfyUI#amd-linux-only.

P.S.

webui works with `.\webui.bat --skip-torch-cuda-test --precision full --no-half`, which I learned from https://huggingface.co/CompVis/stable-diffusion-v1-4/discussions/64
