This guide collects the things I found most helpful when getting started.
This starter guide is intended as a supplement to the existing ComfyUI README.md, which you can find here: ComfyUI GitHub
It assumes that you have already successfully installed ComfyUI.
- Checkpoint (Base Model): The main AI model containing learned weights (like SD1.5). Start your workflow here.
- LoRA (Low-Rank Adaptation): Small files adjusting the style or subject of your base model. Requires a checkpoint.
- VAE (Variational Autoencoder): Handles image encoding/decoding; affects image quality and colors.
- Workflow: A visual graph of nodes that instruct ComfyUI on generating images.
- Checkpoints: Place `.safetensors` or `.ckpt` files in `ComfyUI/models/checkpoints`.
- LoRA: Place LoRA files (`.safetensors`) in `ComfyUI/models/loras`.
- VAE: Optional. Place VAE files (`.safetensors`) in `ComfyUI/models/vae`.
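The folder layout above can be scripted. Below is a minimal Python sketch that files a downloaded model into the right `models` subfolder by kind; the install location `~/ComfyUI` is an assumption, so adjust `COMFY_ROOT` to wherever you cloned ComfyUI.

```python
from pathlib import Path
import shutil

# Assumed install location -- change this to wherever you cloned ComfyUI.
COMFY_ROOT = Path.home() / "ComfyUI"

# Map model kind -> subfolder under ComfyUI/models (per the list above).
MODEL_DIRS = {
    "checkpoint": "checkpoints",
    "lora": "loras",
    "vae": "vae",
}

def install_model(file_path, kind, root=COMFY_ROOT):
    """Move a downloaded .safetensors/.ckpt file into the matching models folder."""
    src = Path(file_path)
    if src.suffix not in {".safetensors", ".ckpt"}:
        raise ValueError(f"Unexpected model file type: {src.suffix}")
    dest_dir = root / "models" / MODEL_DIRS[kind]
    dest_dir.mkdir(parents=True, exist_ok=True)  # create the folder if missing
    dest = dest_dir / src.name
    shutil.move(str(src), dest)
    return dest
```

For example, `install_model("sd15.safetensors", "checkpoint")` would place the file in `ComfyUI/models/checkpoints`. New models are picked up after a restart or a refresh in the UI.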
ComfyUI Manager simplifies managing extensions and nodes.
- To install manually:
```shell
git clone https://github.com/ltdrdata/ComfyUI-Manager custom_nodes/comfyui-manager
```
- Restart ComfyUI, and use the built-in Manager interface to browse and install nodes.
Basic text-to-image workflow:
- Load Checkpoint Node → select your base model.
- CLIP Text Encode Node → input your prompt.
- KSampler Node → generates the latent image.
- VAE Decode Node → converts latents into visible images.
- Save Image Node → saves the generated images.
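Chained together, those nodes form the graph ComfyUI executes. Below is a sketch of that graph in the JSON-style "API format" ComfyUI can export ("Save (API Format)"); node class names such as `CheckpointLoaderSimple` are ComfyUI's built-in node types, the checkpoint filename and prompts are placeholders, and an `EmptyLatentImage` node is included because the sampler needs a blank latent to start from.

```python
# A minimal text-to-image graph in ComfyUI's API workflow format:
# each key is a node id; each value names the node class and wires its inputs.
# A value like ["1", 0] means "output slot 0 of node 1".
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd15.safetensors"}},  # placeholder filename
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "a cozy cabin in the woods"}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "blurry, low quality"}},  # negative prompt
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "ComfyUI"}},
}
```

Reading the wiring back against the steps above: the checkpoint feeds the prompt encoders and the VAE, the sampler consumes model + prompts + latent, and its output is decoded and saved.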
Using LoRAs:
- Add a Load LoRA Node after your checkpoint.
- Set LoRA strength to control its influence.
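In graph terms, the LoRA node sits between the checkpoint loader and anything that consumes its MODEL/CLIP outputs. A sketch of that fragment in ComfyUI's JSON-style API workflow format (filenames are placeholders, node ids arbitrary):

```python
# The LoraLoader node takes the checkpoint's MODEL and CLIP outputs and
# re-emits patched versions; downstream nodes reference those instead.
workflow_fragment = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd15.safetensors"}},   # placeholder
    "2": {"class_type": "LoraLoader",
          "inputs": {
              "model": ["1", 0],                    # MODEL from the checkpoint
              "clip": ["1", 1],                     # CLIP from the checkpoint
              "lora_name": "my_style.safetensors",  # placeholder LoRA file
              "strength_model": 0.8,  # LoRA influence on the diffusion model
              "strength_clip": 0.8,   # LoRA influence on the text encoder
          }},
    # Downstream nodes (CLIP Text Encode, KSampler, ...) should now wire to
    # ["2", 0] and ["2", 1] instead of the checkpoint's outputs.
}
```

Strengths around 0.6–1.0 are a common starting point; 0 disables the LoRA entirely.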
- Workflows can be saved/loaded as `.json` files or embedded in image metadata.
- Drag workflow images into ComfyUI to instantly load setups.
- Official workflows and examples: ComfyUI GitHub
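Because saved workflows are plain JSON, they are easy to store, diff, and reload from scripts as well. A minimal round-trip sketch (the workflow dict here is a toy stand-in for a real exported graph):

```python
import json
import tempfile
from pathlib import Path

# Toy graph standing in for a real exported workflow.
workflow = {"1": {"class_type": "CheckpointLoaderSimple",
                  "inputs": {"ckpt_name": "sd15.safetensors"}}}

path = Path(tempfile.gettempdir()) / "my_workflow.json"
path.write_text(json.dumps(workflow, indent=2))   # save
restored = json.loads(path.read_text())           # reload
```

The same `.json` file can be loaded back through ComfyUI's Load option in the UI.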
- Easy setup with built-in dependencies, automatic updates, and a new intuitive UI.
- Includes ComfyUI Manager by default.
- Ideal for beginners: Download here
Happy generating!