I want to use an Illustrious model on gfx1650. gfx1650 has bugs in fp16 and can only use fp32, which is too slow; maybe int8 can run faster.
I converted and tested the model on my 780M (gfx1103).
I tried `convert_to_quant --int8 --block_size 128 --comfy_quant --simple --nerf_large -i oneObsession_v19.safetensors` and loaded the result with Load Checkpoint (Quantized), which fails with `Weight scale shape mismatch: scale.shape=torch.Size([]), expected (10, 2)`. If I use ComfyUI-FeatherOps with the built-in Load Checkpoint, it generates a black image. If I convert only the diffusion model and use the built-in Load Diffusion Model, it generates a broken image like this.
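For context on the shape mismatch: a per-tensor quantization scheme stores a single scalar scale (`torch.Size([])`), while the loader here appears to expect one scale per 128x128 block, giving a 2-D scale tensor such as `(10, 2)`. The sketch below is a hypothetical illustration of block-wise symmetric int8 quantization (not the actual `convert_to_quant` implementation) showing how a `(10, 2)` scale shape arises from a weight whose dimensions are 10 and 2 multiples of the block size:

```python
import torch

def quantize_blockwise_int8(w: torch.Tensor, block_size: int = 128):
    """Block-wise symmetric int8 quantization (illustrative sketch).

    Returns int8 weights plus a scale tensor with one entry per
    (row-block, col-block), i.e. shape (rows // block_size,
    cols // block_size) -- a 2-D scale like the (10, 2) the loader
    expects, rather than a per-tensor scalar of shape torch.Size([]).
    """
    rows, cols = w.shape
    br, bc = rows // block_size, cols // block_size
    # Split the matrix into (br x bc) blocks of block_size x block_size.
    blocks = w.reshape(br, block_size, bc, block_size)
    # One scale per block: max-abs / 127 for symmetric int8.
    scale = blocks.abs().amax(dim=(1, 3)) / 127.0
    q = torch.clamp(torch.round(blocks / scale[:, None, :, None]),
                    -128, 127).to(torch.int8)
    return q.reshape(rows, cols), scale

# A 1280x256 weight with block_size=128 yields a (10, 2) scale tensor.
w = torch.randn(1280, 256)
q, scale = quantize_blockwise_int8(w)
print(scale.shape)  # torch.Size([10, 2])
```

If the checkpoint instead stores a single scalar scale for the whole tensor, a loader that expects this block-wise layout would report exactly the mismatch seen above.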