
🔥OpenVINO Node for ComfyUI🔥

This node optimizes model inference performance in ComfyUI by leveraging the Intel OpenVINO toolkit.



💻Supported Hardware

This node supports running models on Intel CPU, GPU and NPU devices. You can find more detailed information in the OpenVINO System Requirements.
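To check which of these devices OpenVINO can actually see on a given machine, you can query the runtime directly. A minimal sketch, assuming the `openvino` Python package is available (it falls back to an empty list when the package is absent):

```python
import importlib.util

# Query the OpenVINO runtime for the inference devices it can target,
# e.g. ['CPU', 'GPU', 'NPU'] on a machine with an Intel iGPU and NPU.
if importlib.util.find_spec("openvino") is not None:
    import openvino as ov
    devices = ov.Core().available_devices
else:
    devices = []  # OpenVINO is not installed in this environment
print(devices)
```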

🚗Install

Prerequisites

The recommended installation method is to use the Comfy Registry.

Comfy Registry

This node can be installed via the Comfy Registry:

comfy node registry-install comfyui-openvino

ComfyUI-Manager

This node can be installed via ComfyUI-Manager in the UI or via the CLI:

comfy node install comfyui-openvino

Manual

This node can also be installed manually by cloning the repository into your custom_nodes folder and then installing its dependencies:

cd ComfyUI/custom_nodes
git clone https://github.com/openvino-dev-samples/comfyui_openvino 
cd comfyui_openvino
pip install -r requirements.txt

🚀Instructions

To use the OpenVINO nodes in ComfyUI, follow this example as a reference:

  1. Start a ComfyUI server.

    • launch from source:
    cd ComfyUI
    python3 main.py --cpu --use-pytorch-cross-attention
    
    • launch from comfy-cli:
    comfy launch -- --cpu --use-pytorch-cross-attention
    
  2. Prepare a standard workflow in ComfyUI.


  3. Add OpenVINO Node.


  4. Connect TorchCompileDiffusionOpenVINO to the diffusion model and TorchCompileVAEOpenVINO to the VAE model.


  5. Run the workflow. Note that an additional warm-up inference may be needed after switching to a new model.

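For orientation, a ComfyUI custom node is just a Python class with a declared interface that ComfyUI discovers via a registration dict. The sketch below shows the general shape such an OpenVINO compile node could take; the class name, parameters, and the body of `patch` are illustrative assumptions, not this repository's actual implementation.

```python
# Hedged sketch of the standard ComfyUI custom-node interface
# (INPUT_TYPES / RETURN_TYPES / FUNCTION). The class name and the
# body of `patch` are hypothetical, for illustration only.

class OpenVINOCompileSketch:
    @classmethod
    def INPUT_TYPES(cls):
        # Declares the sockets shown in the UI: a model input plus
        # a dropdown for the target OpenVINO device.
        return {
            "required": {
                "model": ("MODEL",),
                "device": (["CPU", "GPU", "NPU"],),
            }
        }

    RETURN_TYPES = ("MODEL",)
    FUNCTION = "patch"
    CATEGORY = "OpenVINO"

    def patch(self, model, device):
        # A real node would wrap the model's forward pass, e.g. with
        # torch.compile(..., backend="openvino"), targeting `device`.
        return (model,)


# Registration dict that ComfyUI scans for in custom_nodes packages.
NODE_CLASS_MAPPINGS = {"OpenVINOCompileSketch": OpenVINOCompileSketch}
```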

🤔Q&A

  1. Does it support the LoRA loader?

    Yes. You can refer to the following picture to add it to a workflow using the OpenVINO nodes.

    (screenshot: LoRA loader connected in an OpenVINO workflow)
